Progetto Data Mining¶

About Dataset¶

Content¶

Questo dataset contiene circa 10 anni di osservazioni meteo gornaliere da tutta l'Australia. Contiene in particolare dati sulle giornate di pioggia e di sole di tutto il paese. Contiene 23 colonne.

  • Date: la data dell'osservazione
  • Location: il luogo della stazione meteo
  • MinTemp: temperatura minima
  • MaxTemp: temperatura massima
  • Rainfall: il quantitativo di pioggia caduta registrata in mm
  • Evaporation: Dato sull'evaporazione in mm
  • Sunshine: Numero di ore di luce nel giorno
  • WindGustDir: Direzione della folata di vento più forte
  • WindGustSpeed: Velocità della folata di vento più forte
  • WindGust3Dir: Direzione del vento alle 3
  • WindGust9Dir: Direzione del vento alle 9
  • WindSpeed9am: Velocità del vento alle 9
  • WindSpeed3am: Velocità del vento alle 3
  • Humidity3pm: Umidità alle 3
  • Humidity9pm: Umidità alle 9
  • Pressure9am: Pressione atmosferica alle 9
  • Pressure3pm: Pressione atmosferica alle 3
  • Cloud9am: Porzione di cielo oscurata dalle nuvole calcolata in "oktas" alle 9
  • Cloud3pm:Porzione di cielo oscurata dalle nuvole calcolata in "oktas" alle 3
  • Temp9am: Temperatura alle 9
  • Temp3pm: Temperatura alle 3
  • RainToday: Boolean: 1 se pioverà oggi con una quantità maggiore a 1mm di acqua, 0 altrimenti
  • RainTomorrow: Boolean: 1 se pioverà domani con una quantità maggiore a 1mm di acqua, 0 altrimenti

L'obbiettivo del progetto consiste nel fare predizione sull'attributo 'RainTomorrow'

In [1]:
from pandas import DataFrame, Series
from io import StringIO
import pandas as pd
import numpy as np
import os
import matplotlib
import matplotlib.pyplot as plt
import plotly.express as px
from IPython.display import Image
import seaborn as sb
import statistics as stat
import time
import math
import tensorflow as tf
from sklearn.metrics import *
from sklearn.metrics import *
from sklearn import metrics
from sklearn.model_selection import GridSearchCV
In [2]:
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split, GridSearchCV
from imblearn.over_sampling import RandomOverSampler
from sklearn.metrics import *
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.experimental import enable_halving_search_cv
from sklearn.model_selection import HalvingGridSearchCV
from sklearn.ensemble import VotingClassifier, AdaBoostClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
from sklearn.preprocessing import OrdinalEncoder
from imblearn.over_sampling import SMOTE
from sklearn.neural_network import MLPClassifier

from sklearn.ensemble import VotingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from xgboost import XGBClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import Dense
In [3]:
def describe(a):
    if type(a) is np.ndarray:
        print("data:\n{}\nshape:{}\ndtype:{}\ntype: {}".format(a, a.shape, a.dtype, type(a)))
    elif type(a) is pd.Series:
        print("data:\n{}\nshape:{}\ndtype:{}\nname:{}\nindex-name:{}\ntype:{}".format(a, a.shape, a.dtype, a.name, a.index.name, type(a)))
    elif type(a) is pd.DataFrame:
        print("data:\n{}\nshape:{}\ntype:{}".format(a, a.shape,type(a)))
    else:
        print("{}, type:{}".format(a, type(a)))

Informazioni di base e data visualization¶

In [4]:
dataFrameWeather = pd.read_csv('./content/drive/MyDrive/weatherAUS.csv')
In [5]:
dataFrameWeather.shape
Out[5]:
(145460, 23)
In [6]:
dataFrameWeather.describe()
Out[6]:
MinTemp MaxTemp Rainfall Evaporation Sunshine WindGustSpeed WindSpeed9am WindSpeed3pm Humidity9am Humidity3pm Pressure9am Pressure3pm Cloud9am Cloud3pm Temp9am Temp3pm
count 143975.000000 144199.000000 142199.000000 82670.000000 75625.000000 135197.000000 143693.000000 142398.000000 142806.000000 140953.000000 130395.00000 130432.000000 89572.000000 86102.000000 143693.000000 141851.00000
mean 12.194034 23.221348 2.360918 5.468232 7.611178 40.035230 14.043426 18.662657 68.880831 51.539116 1017.64994 1015.255889 4.447461 4.509930 16.990631 21.68339
std 6.398495 7.119049 8.478060 4.193704 3.785483 13.607062 8.915375 8.809800 19.029164 20.795902 7.10653 7.037414 2.887159 2.720357 6.488753 6.93665
min -8.500000 -4.800000 0.000000 0.000000 0.000000 6.000000 0.000000 0.000000 0.000000 0.000000 980.50000 977.100000 0.000000 0.000000 -7.200000 -5.40000
25% 7.600000 17.900000 0.000000 2.600000 4.800000 31.000000 7.000000 13.000000 57.000000 37.000000 1012.90000 1010.400000 1.000000 2.000000 12.300000 16.60000
50% 12.000000 22.600000 0.000000 4.800000 8.400000 39.000000 13.000000 19.000000 70.000000 52.000000 1017.60000 1015.200000 5.000000 5.000000 16.700000 21.10000
75% 16.900000 28.200000 0.800000 7.400000 10.600000 48.000000 19.000000 24.000000 83.000000 66.000000 1022.40000 1020.000000 7.000000 7.000000 21.600000 26.40000
max 33.900000 48.100000 371.000000 145.000000 14.500000 135.000000 130.000000 87.000000 100.000000 100.000000 1041.00000 1039.600000 9.000000 9.000000 40.200000 46.70000

Controllo se il dataset contiene valori duplicati

In [7]:
dataFrameWeather.duplicated().value_counts()
Out[7]:
False    145460
Name: count, dtype: int64
In [8]:
dataFrameWeather.sample(frac=1).head(30)
Out[8]:
Date Location MinTemp MaxTemp Rainfall Evaporation Sunshine WindGustDir WindGustSpeed WindDir9am ... Humidity9am Humidity3pm Pressure9am Pressure3pm Cloud9am Cloud3pm Temp9am Temp3pm RainToday RainTomorrow
83256 2015-06-06 Dartmoor 10.9 17.2 0.0 0.2 5.6 N 37.0 NNW ... NaN NaN 1028.0 1025.0 NaN NaN 12.0 16.9 No No
20219 2015-01-08 NorahHead 21.4 29.6 0.0 NaN NaN NE 61.0 NNE ... 72.0 74.0 1016.1 1011.8 NaN NaN 24.2 25.2 No No
44673 2014-12-25 Wollongong 18.8 25.4 0.0 NaN NaN NNE 31.0 NE ... 88.0 87.0 1011.3 1007.3 8.0 8.0 20.5 21.6 No Yes
115998 2012-11-09 PearceRAAF 11.4 33.5 0.0 NaN 13.0 WSW 43.0 NNE ... 19.0 16.0 1016.9 1013.6 0.0 3.0 27.6 30.9 No No
5715 2016-07-27 BadgerysCreek 6.5 18.1 0.0 NaN NaN WNW 50.0 N ... 56.0 43.0 1012.3 1013.9 NaN NaN 15.1 17.1 No No
7981 2014-07-15 Cobar 6.8 17.7 0.0 2.4 NaN N 35.0 ENE ... 54.0 39.0 1020.2 1016.2 8.0 8.0 11.9 17.2 No No
62057 2011-06-25 Sale 3.8 18.5 0.0 1.0 6.8 NNE 19.0 WNW ... 86.0 54.0 1028.4 1025.4 2.0 2.0 8.4 17.4 No No
6396 2009-12-14 Cobar 18.9 35.9 0.0 12.8 12.4 WSW 39.0 ESE ... 23.0 11.0 1016.5 1014.0 1.0 1.0 27.1 34.5 No No
81073 2009-03-17 Dartmoor 11.1 15.9 0.2 3.0 0.0 SSW 31.0 SSW ... 75.0 74.0 1019.9 1019.7 NaN NaN 13.5 15.2 No No
48448 2015-11-29 Canberra 15.9 27.4 0.0 NaN NaN WSW 41.0 NW ... 64.0 25.0 1013.0 1009.7 8.0 NaN 19.4 25.5 No No
115504 2011-07-04 PearceRAAF 0.7 15.0 0.0 NaN 7.6 ESE 28.0 SW ... 93.0 46.0 1028.6 1027.2 1.0 3.0 5.8 14.6 No No
128266 2013-03-07 Walpole 16.4 23.4 1.0 NaN NaN SE 39.0 SE ... 61.0 57.0 1021.3 1018.7 NaN NaN 19.4 21.4 No No
91560 2012-08-12 GoldCoast 10.7 21.3 0.0 NaN NaN S 59.0 SSW ... 53.0 49.0 1020.2 1018.0 NaN NaN 17.2 19.6 No No
56309 2012-04-21 Ballarat 7.4 23.2 0.0 NaN NaN N 37.0 NNE ... 84.0 49.0 1016.0 1012.4 0.0 NaN 17.4 22.2 No Yes
51043 2014-09-11 Tuggeranong 1.7 16.9 0.0 NaN NaN W 44.0 W ... 52.0 44.0 1016.8 1015.5 NaN NaN 14.1 16.0 No No
36899 2010-01-06 WaggaWagga 18.6 34.5 6.0 12.2 13.7 W 54.0 NNE ... 59.0 11.0 1014.2 1011.6 0.0 2.0 24.9 33.6 Yes No
24788 2010-09-23 Penrith 14.2 20.4 NaN NaN NaN SE 24.0 SSW ... 80.0 65.0 NaN NaN NaN NaN 15.9 18.2 NaN No
18573 2010-04-04 NorahHead 15.9 21.7 13.4 NaN NaN S 46.0 S ... 69.0 73.0 1022.1 1019.6 NaN NaN 20.3 19.8 Yes No
23087 2014-08-20 NorfolkIsland 16.1 19.8 0.6 4.2 9.9 W 59.0 WNW ... 82.0 70.0 1010.8 1009.9 3.0 1.0 18.0 19.1 No No
108474 2017-03-21 Woomera 17.7 30.7 0.0 12.2 0.0 SW 43.0 SW ... 68.0 30.0 1011.9 1010.6 1.0 NaN 19.5 28.7 No No
86050 2014-05-03 Brisbane 15.4 21.1 13.8 6.0 9.0 WNW 44.0 W ... 52.0 33.0 1005.8 1004.3 5.0 6.0 16.9 19.2 Yes No
125817 2014-09-15 SalmonGums 0.3 21.9 0.0 NaN NaN SSW 33.0 ESE ... 45.0 37.0 NaN NaN NaN NaN 16.5 21.0 No No
729 2010-11-30 Albury 14.4 23.3 1.6 NaN NaN SE 35.0 SE ... 69.0 81.0 1015.0 1014.3 NaN NaN 17.4 19.5 Yes Yes
29397 2015-05-09 Richmond 4.0 21.2 0.0 NaN NaN W 35.0 NE ... 74.0 40.0 1017.9 1013.9 NaN 1.0 11.4 20.8 No No
41093 2013-07-03 Williamtown 7.0 18.1 0.2 2.6 8.7 NW 24.0 NW ... 96.0 62.0 1025.1 1021.5 2.0 1.0 9.2 17.6 No No
81516 2010-06-03 Dartmoor 5.2 15.5 0.2 0.2 2.1 WNW 39.0 NaN ... 99.0 78.0 1027.2 1024.9 NaN NaN 6.6 14.9 No No
72005 2013-08-29 Mildura 14.8 27.6 0.0 5.4 5.0 NW 63.0 NE ... 26.0 13.0 1006.7 1002.0 7.0 1.0 20.5 26.3 No No
131729 2013-12-02 Hobart 15.0 30.4 0.0 6.4 13.7 NNW 50.0 NNW ... 44.0 52.0 1012.7 1010.2 NaN NaN 20.9 21.7 No No
14699 2016-06-14 Moree 6.8 21.5 0.0 11.2 NaN E 30.0 ESE ... 67.0 43.0 1034.4 1031.4 NaN NaN 13.8 21.2 No No
57961 2016-12-27 Ballarat 14.3 26.3 3.8 NaN NaN ESE 50.0 NNE ... 98.0 86.0 1011.5 1009.9 8.0 8.0 20.0 22.6 Yes Yes

30 rows × 23 columns

In [9]:
dataFrameWeather.isna().sum()
Out[9]:
Date                 0
Location             0
MinTemp           1485
MaxTemp           1261
Rainfall          3261
Evaporation      62790
Sunshine         69835
WindGustDir      10326
WindGustSpeed    10263
WindDir9am       10566
WindDir3pm        4228
WindSpeed9am      1767
WindSpeed3pm      3062
Humidity9am       2654
Humidity3pm       4507
Pressure9am      15065
Pressure3pm      15028
Cloud9am         55888
Cloud3pm         59358
Temp9am           1767
Temp3pm           3609
RainToday         3261
RainTomorrow      3267
dtype: int64

Come possiamo notare il dataframe non presenta dati duplicati ma presenta molti dati NaN che andranno gestiti

In [10]:
import warnings
warnings.filterwarnings('ignore')
fig, ax = plt.subplots(figsize=(10, 10))
dataFrameWeather.hist(ax=ax)
plt.subplots_adjust(right=1.2, top=1.2)
plt.show()

Data cleaning e Pre-processing¶

Inizio con il riempimento dei dati nan. Parto con 'MinTemp' e 'MaxTemp'. Essendo pochi i dati mancanti, essi verranno sostituiti con la temperatura media di ciascuna colonna.

In [11]:
mean_MinTemp = dataFrameWeather['MinTemp'].mean()
mean_MaxTemp = dataFrameWeather['MaxTemp'].mean()

dataFrameWeather['MinTemp'].fillna(mean_MinTemp, inplace=True)
dataFrameWeather['MaxTemp'].fillna(mean_MaxTemp, inplace=True)
In [12]:
dataFrameWeather.isna().sum()
Out[12]:
Date                 0
Location             0
MinTemp              0
MaxTemp              0
Rainfall          3261
Evaporation      62790
Sunshine         69835
WindGustDir      10326
WindGustSpeed    10263
WindDir9am       10566
WindDir3pm        4228
WindSpeed9am      1767
WindSpeed3pm      3062
Humidity9am       2654
Humidity3pm       4507
Pressure9am      15065
Pressure3pm      15028
Cloud9am         55888
Cloud3pm         59358
Temp9am           1767
Temp3pm           3609
RainToday         3261
RainTomorrow      3267
dtype: int64

Andiamo ora a gestire il dato Rainfall. Esso indica se la quantità di acqua caduta in un giorno 'RainToday' = 'Yes' è strettamente maggiore di 1mm di acqua.

Si decide di droppare le tuple che contengono valori NaN in quanto la correzione richiederebbe troppo tempo per essere implementata visto il numero esiguo di tuple

In [13]:
dataFrameWeather.dropna(subset='Rainfall', inplace=True)
dataFrameWeather.isna().sum()
Out[13]:
Date                 0
Location             0
MinTemp              0
MaxTemp              0
Rainfall             0
Evaporation      60488
Sunshine         67820
WindGustDir       9725
WindGustSpeed     9665
WindDir9am        9789
WindDir3pm        3799
WindSpeed9am      1091
WindSpeed3pm      2647
Humidity9am       1554
Humidity3pm       3630
Pressure9am      13940
Pressure3pm      13993
Cloud9am         53331
Cloud3pm         56874
Temp9am            685
Temp3pm           2746
RainToday            0
RainTomorrow      1412
dtype: int64

Si decide di droppare, inoltre, le colonne 'WindDir3pm' and 'WindDir9pm' perchè risultano essere irrilevanti ed essere una specifica di una colonna già esistente. Stessa cosa con 'WindSpeed9am', 'WinSpeed3pm'

In [14]:
dataFrameWeather = dataFrameWeather.drop(columns=['WindDir3pm', 'WindDir9am', 'WindSpeed3pm', 'WindSpeed9am'], axis=1)
dataFrameWeather.info()
<class 'pandas.core.frame.DataFrame'>
Index: 142199 entries, 0 to 145459
Data columns (total 19 columns):
 #   Column         Non-Null Count   Dtype  
---  ------         --------------   -----  
 0   Date           142199 non-null  object 
 1   Location       142199 non-null  object 
 2   MinTemp        142199 non-null  float64
 3   MaxTemp        142199 non-null  float64
 4   Rainfall       142199 non-null  float64
 5   Evaporation    81711 non-null   float64
 6   Sunshine       74379 non-null   float64
 7   WindGustDir    132474 non-null  object 
 8   WindGustSpeed  132534 non-null  float64
 9   Humidity9am    140645 non-null  float64
 10  Humidity3pm    138569 non-null  float64
 11  Pressure9am    128259 non-null  float64
 12  Pressure3pm    128206 non-null  float64
 13  Cloud9am       88868 non-null   float64
 14  Cloud3pm       85325 non-null   float64
 15  Temp9am        141514 non-null  float64
 16  Temp3pm        139453 non-null  float64
 17  RainToday      142199 non-null  object 
 18  RainTomorrow   140787 non-null  object 
dtypes: float64(14), object(5)
memory usage: 21.7+ MB

Posso, inoltre, droppare le tuple mancanti nella colonna 'RainTomorrow' essendo di un numero molto esiguo ed essendo questo il parametro che vogliamo andare a predire

In [15]:
dataFrameWeather.dropna(subset='RainTomorrow', inplace=True)
plt.figure(figsize=(5,6))
ax = sb.countplot(x='RainTomorrow', data=dataFrameWeather, palette="Set1")
plt.show()
dataFrameWeather.info()
<class 'pandas.core.frame.DataFrame'>
Index: 140787 entries, 0 to 145458
Data columns (total 19 columns):
 #   Column         Non-Null Count   Dtype  
---  ------         --------------   -----  
 0   Date           140787 non-null  object 
 1   Location       140787 non-null  object 
 2   MinTemp        140787 non-null  float64
 3   MaxTemp        140787 non-null  float64
 4   Rainfall       140787 non-null  float64
 5   Evaporation    81093 non-null   float64
 6   Sunshine       73982 non-null   float64
 7   WindGustDir    131624 non-null  object 
 8   WindGustSpeed  131682 non-null  float64
 9   Humidity9am    139270 non-null  float64
 10  Humidity3pm    137286 non-null  float64
 11  Pressure9am    127044 non-null  float64
 12  Pressure3pm    127018 non-null  float64
 13  Cloud9am       88162 non-null   float64
 14  Cloud3pm       84693 non-null   float64
 15  Temp9am        140131 non-null  float64
 16  Temp3pm        138163 non-null  float64
 17  RainToday      140787 non-null  object 
 18  RainTomorrow   140787 non-null  object 
dtypes: float64(14), object(5)
memory usage: 21.5+ MB

Si continua con la sostituizione ai valori nan delle colonne 'Humidity9am', 'Humidity3pm', 'Temp9am', 'Temp3pm'. Qui si andrà ad effettuare una group-by sull'attributo 'RainToday', così da poter differenziare la media calcolata per giorni in cui piove e per giorni in cui non piove.

In [16]:
average_humidity = dataFrameWeather.groupby('RainToday')['Humidity9am'].transform('mean')
average_humidity2 = dataFrameWeather.groupby('RainToday')['Humidity3pm'].transform('mean')
average_temp = dataFrameWeather.groupby('RainToday')['Temp9am'].transform('mean')
average_temp2 = dataFrameWeather.groupby('RainToday')['Temp3pm'].transform('mean')
dataFrameWeather['Humidity9am'].fillna(average_humidity, inplace=True)
dataFrameWeather['Humidity3pm'].fillna(average_humidity2, inplace=True)
dataFrameWeather['Temp9am'].fillna(average_temp, inplace=True)
dataFrameWeather['Temp3pm'].fillna(average_temp2, inplace=True)
In [17]:
dataFrameWeather.isna().sum()
Out[17]:
Date                 0
Location             0
MinTemp              0
MaxTemp              0
Rainfall             0
Evaporation      59694
Sunshine         66805
WindGustDir       9163
WindGustSpeed     9105
Humidity9am          0
Humidity3pm          0
Pressure9am      13743
Pressure3pm      13769
Cloud9am         52625
Cloud3pm         56094
Temp9am              0
Temp3pm              0
RainToday            0
RainTomorrow         0
dtype: int64
In [18]:
plt.figure(figsize=(13,10))

plt.subplot(2, 2, 1)
fig = dataFrameWeather.Rainfall.hist(bins=10)
fig.set_xlabel('Rainfall')
fig.set_ylabel('RainTomorrow')

plt.subplot(2, 2, 2)
fig = dataFrameWeather.WindGustSpeed.hist(bins=10)
fig.set_xlabel('WinGustSpeed')
fig.set_ylabel('RainTomorrow')

plt.subplot(2, 2, 3)
fig = dataFrameWeather.Temp3pm.hist(bins=10)
fig.set_xlabel('Temp3pm')
fig.set_ylabel('RainTomorrow')

plt.subplot(2, 2, 4)
fig = dataFrameWeather.Humidity3pm.hist(bins=10)
fig.set_xlabel('Humidity')
fig.set_ylabel('RainTomorrow')
Out[18]:
Text(0, 0.5, 'RainTomorrow')

Si decide di droppare la colonna Sunshine in quanto è quella con la più bassa correlazione con gli altri attributi

In [19]:
dataFrameWeather = dataFrameWeather.drop(columns=['Sunshine'], axis=1)
dataFrameWeather.info()
<class 'pandas.core.frame.DataFrame'>
Index: 140787 entries, 0 to 145458
Data columns (total 18 columns):
 #   Column         Non-Null Count   Dtype  
---  ------         --------------   -----  
 0   Date           140787 non-null  object 
 1   Location       140787 non-null  object 
 2   MinTemp        140787 non-null  float64
 3   MaxTemp        140787 non-null  float64
 4   Rainfall       140787 non-null  float64
 5   Evaporation    81093 non-null   float64
 6   WindGustDir    131624 non-null  object 
 7   WindGustSpeed  131682 non-null  float64
 8   Humidity9am    140787 non-null  float64
 9   Humidity3pm    140787 non-null  float64
 10  Pressure9am    127044 non-null  float64
 11  Pressure3pm    127018 non-null  float64
 12  Cloud9am       88162 non-null   float64
 13  Cloud3pm       84693 non-null   float64
 14  Temp9am        140787 non-null  float64
 15  Temp3pm        140787 non-null  float64
 16  RainToday      140787 non-null  object 
 17  RainTomorrow   140787 non-null  object 
dtypes: float64(13), object(5)
memory usage: 20.4+ MB
In [20]:
dataFrameWeather.isna().sum()
Out[20]:
Date                 0
Location             0
MinTemp              0
MaxTemp              0
Rainfall             0
Evaporation      59694
WindGustDir       9163
WindGustSpeed     9105
Humidity9am          0
Humidity3pm          0
Pressure9am      13743
Pressure3pm      13769
Cloud9am         52625
Cloud3pm         56094
Temp9am              0
Temp3pm              0
RainToday            0
RainTomorrow         0
dtype: int64

Trasformo, a questo punto, la data dal formato gg-mm-aaaa in tre colonne diverse contenenti il giorno, il mese e l'anno. Questa soluzione sarà utile in seguito per fare imputation

In [21]:
df_copy = dataFrameWeather.copy()
# Aggiungo la colonna "month" al dataframe
df_copy['day'] = df_copy.Date.str.split("-", expand = True)[0]
df_copy['month'] = df_copy.Date.str.split("-", expand = True)[1]
# Aggiungo la colonna "year" al dataframe
df_copy['year'] = df_copy.Date.str.split("-", expand = True)[2]
df_copy = df_copy.drop(columns='Date', axis=1)
In [22]:
df_copy.isna().sum()
Out[22]:
Location             0
MinTemp              0
MaxTemp              0
Rainfall             0
Evaporation      59694
WindGustDir       9163
WindGustSpeed     9105
Humidity9am          0
Humidity3pm          0
Pressure9am      13743
Pressure3pm      13769
Cloud9am         52625
Cloud3pm         56094
Temp9am              0
Temp3pm              0
RainToday            0
RainTomorrow         0
day                  0
month                0
year                 0
dtype: int64

Adesso, per poter effettuare encoding attraverso l'uso di un encoder, ho bisogno di divedere il dataset in due datasets diversi, il primo che conterrà solo dati categorici così da poter fare encoding su di essi trasformandoli in dati numerici, e il resto conterrà solamente valori int e float

In [23]:
df_num = df_copy.select_dtypes(include=[np.number])
df_cat = df_copy.select_dtypes(include=['object'])

Uso un encoder per trasformare i dati categorici in dati numerici automaticamente

In [24]:
for attr in df_cat.columns:
    df_cat[attr] = LabelEncoder().fit_transform(df_cat[attr])
df_encoded = pd.concat([df_cat, df_num], axis = 1)
df_encoded.head()
Out[24]:
Location WindGustDir RainToday RainTomorrow day month year MinTemp MaxTemp Rainfall Evaporation WindGustSpeed Humidity9am Humidity3pm Pressure9am Pressure3pm Cloud9am Cloud3pm Temp9am Temp3pm
0 2 13 0 0 1 11 0 13.4 22.9 0.6 NaN 44.0 71.0 22.0 1007.7 1007.1 8.0 NaN 16.9 21.8
1 2 14 0 0 1 11 1 7.4 25.1 0.0 NaN 44.0 44.0 25.0 1010.6 1007.8 NaN NaN 17.2 24.3
2 2 15 0 0 1 11 2 12.9 25.7 0.0 NaN 46.0 38.0 30.0 1007.6 1008.7 NaN 2.0 21.0 23.2
3 2 4 0 0 1 11 3 9.2 28.0 0.0 NaN 24.0 45.0 16.0 1017.6 1012.8 NaN NaN 18.1 26.5
4 2 13 0 0 1 11 4 17.5 32.3 1.0 NaN 41.0 82.0 33.0 1010.8 1006.0 7.0 8.0 17.8 29.7

Data l'elevato numero di tuple NaN rimanenti nelle colonne si è deciso di utilizzare un imputer. L'imputation è una tecnica utilizzata nella fase di preprocessing dei dati per gestire i valori mancanti in un dataset. L'imputer viene utilizzato per riempire i valori mancanti con valori appropriati in base alle caratteristiche dei dati.

In [25]:
df_encoded.info()
<class 'pandas.core.frame.DataFrame'>
Index: 140787 entries, 0 to 145458
Data columns (total 20 columns):
 #   Column         Non-Null Count   Dtype  
---  ------         --------------   -----  
 0   Location       140787 non-null  int32  
 1   WindGustDir    140787 non-null  int32  
 2   RainToday      140787 non-null  int32  
 3   RainTomorrow   140787 non-null  int32  
 4   day            140787 non-null  int32  
 5   month          140787 non-null  int32  
 6   year           140787 non-null  int32  
 7   MinTemp        140787 non-null  float64
 8   MaxTemp        140787 non-null  float64
 9   Rainfall       140787 non-null  float64
 10  Evaporation    81093 non-null   float64
 11  WindGustSpeed  131682 non-null  float64
 12  Humidity9am    140787 non-null  float64
 13  Humidity3pm    140787 non-null  float64
 14  Pressure9am    127044 non-null  float64
 15  Pressure3pm    127018 non-null  float64
 16  Cloud9am       88162 non-null   float64
 17  Cloud3pm       84693 non-null   float64
 18  Temp9am        140787 non-null  float64
 19  Temp3pm        140787 non-null  float64
dtypes: float64(13), int32(7)
memory usage: 18.8 MB
In [26]:
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer

imputer = IterativeImputer()
df_imputed = df_encoded.copy()
df_imputed = imputer.fit_transform(df_imputed) #imputation dei valori mancanti
df_imputed = pd.DataFrame(df_imputed, columns=df_encoded.columns) #conversione in dataframe pandas
df_imputed.head(100)
Out[26]:
Location WindGustDir RainToday RainTomorrow day month year MinTemp MaxTemp Rainfall Evaporation WindGustSpeed Humidity9am Humidity3pm Pressure9am Pressure3pm Cloud9am Cloud3pm Temp9am Temp3pm
0 2.0 13.0 0.0 0.0 1.0 11.0 0.0 13.4 22.9 0.6 5.978670 44.0 71.0 22.0 1007.7 1007.1 8.000000 5.039727 16.9 21.8
1 2.0 14.0 0.0 0.0 1.0 11.0 1.0 7.4 25.1 0.0 6.242761 44.0 44.0 25.0 1010.6 1007.8 1.776385 2.605029 17.2 24.3
2 2.0 15.0 0.0 0.0 1.0 11.0 2.0 12.9 25.7 0.0 8.273388 46.0 38.0 30.0 1007.6 1008.7 2.037233 2.000000 21.0 23.2
3 2.0 4.0 0.0 0.0 1.0 11.0 3.0 9.2 28.0 0.0 6.242160 24.0 45.0 16.0 1017.6 1012.8 1.371820 2.029181 18.1 26.5
4 2.0 13.0 0.0 0.0 1.0 11.0 4.0 17.5 32.3 1.0 7.194498 41.0 82.0 33.0 1010.8 1006.0 7.000000 8.000000 17.8 29.7
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
95 2.0 10.0 0.0 0.0 2.0 2.0 7.0 11.0 30.2 0.0 6.895401 24.0 54.0 20.0 1017.0 1014.7 1.857876 2.275389 17.6 28.8
96 2.0 5.0 0.0 0.0 2.0 2.0 8.0 13.8 31.8 0.0 7.556472 24.0 49.0 28.0 1019.7 1015.9 2.302235 2.598391 18.6 30.5
97 2.0 4.0 0.0 1.0 2.0 2.0 9.0 15.5 32.0 0.0 8.374073 50.0 51.0 25.0 1019.5 1016.2 3.150445 3.638528 20.1 30.8
98 2.0 4.0 1.0 0.0 2.0 2.0 10.0 18.4 30.5 1.2 7.175351 44.0 57.0 23.0 1021.3 1018.0 3.344974 2.861642 21.5 29.6
99 2.0 13.0 0.0 1.0 2.0 2.0 11.0 20.9 25.7 0.0 6.882608 37.0 52.0 90.0 1019.5 1018.9 6.130625 8.000000 22.2 18.8

100 rows × 20 columns

In [27]:
df_imputed.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 140787 entries, 0 to 140786
Data columns (total 20 columns):
 #   Column         Non-Null Count   Dtype  
---  ------         --------------   -----  
 0   Location       140787 non-null  float64
 1   WindGustDir    140787 non-null  float64
 2   RainToday      140787 non-null  float64
 3   RainTomorrow   140787 non-null  float64
 4   day            140787 non-null  float64
 5   month          140787 non-null  float64
 6   year           140787 non-null  float64
 7   MinTemp        140787 non-null  float64
 8   MaxTemp        140787 non-null  float64
 9   Rainfall       140787 non-null  float64
 10  Evaporation    140787 non-null  float64
 11  WindGustSpeed  140787 non-null  float64
 12  Humidity9am    140787 non-null  float64
 13  Humidity3pm    140787 non-null  float64
 14  Pressure9am    140787 non-null  float64
 15  Pressure3pm    140787 non-null  float64
 16  Cloud9am       140787 non-null  float64
 17  Cloud3pm       140787 non-null  float64
 18  Temp9am        140787 non-null  float64
 19  Temp3pm        140787 non-null  float64
dtypes: float64(20)
memory usage: 21.5 MB

Come si può vedere l'encoder ha trasformato i dati che prima erano int in float. Castizzo quindi queste colonne in dati int nuovamente

In [28]:
df_imputed['Location'] = df_imputed['Location'].astype(int)
df_imputed['WindGustDir'] = df_imputed['WindGustDir'].astype(int)
df_imputed['RainTomorrow'] = df_imputed['RainTomorrow'].astype(int)
df_imputed['RainToday'] = df_imputed['RainToday'].astype(int)
df_imputed['day'] = df_imputed['day'].astype(int)
df_imputed['month'] = df_imputed['month'].astype(int)
df_imputed['year'] = df_imputed['year'].astype(int)
df_imputed.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 140787 entries, 0 to 140786
Data columns (total 20 columns):
 #   Column         Non-Null Count   Dtype  
---  ------         --------------   -----  
 0   Location       140787 non-null  int32  
 1   WindGustDir    140787 non-null  int32  
 2   RainToday      140787 non-null  int32  
 3   RainTomorrow   140787 non-null  int32  
 4   day            140787 non-null  int32  
 5   month          140787 non-null  int32  
 6   year           140787 non-null  int32  
 7   MinTemp        140787 non-null  float64
 8   MaxTemp        140787 non-null  float64
 9   Rainfall       140787 non-null  float64
 10  Evaporation    140787 non-null  float64
 11  WindGustSpeed  140787 non-null  float64
 12  Humidity9am    140787 non-null  float64
 13  Humidity3pm    140787 non-null  float64
 14  Pressure9am    140787 non-null  float64
 15  Pressure3pm    140787 non-null  float64
 16  Cloud9am       140787 non-null  float64
 17  Cloud3pm       140787 non-null  float64
 18  Temp9am        140787 non-null  float64
 19  Temp3pm        140787 non-null  float64
dtypes: float64(13), int32(7)
memory usage: 17.7 MB
In [29]:
warnings.filterwarnings('ignore')
fig, ax = plt.subplots(figsize=(10, 10))
df_imputed.hist(ax=ax)
plt.subplots_adjust(right=1.2, top=1.2)
plt.show()
In [30]:
num = ['Humidity3pm', 'Humidity9am', 'Temp3pm', 'MinTemp', 'MaxTemp', 'Temp9am']
sb.pairplot(df_imputed[num], kind='scatter', diag_kind='hist', palette='Rainbow')
plt.show()

Come si può vedere dal pairplot sovrastante, il dataset risulta essere molto popolato nelle zone centrali. Ad uno sguardo più attento, però, non possono sfuggire i numerosi outlier che si riescono ad intravedere soprattutto nei grafici contenenti "Temp9am" e "Temp3am". Si decide perciò di attuare una tecnica di eliminazione degli outlier chiamata DBSCAN, che attraverso tecniche di clusterizzazione riesce a riconoscerli e ad eliminarli.

Outlier detection¶

In [31]:
df_imputed.shape
Out[31]:
(140787, 20)
In [32]:
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Standardize the features
scaler = StandardScaler()
df_scaled = scaler.fit_transform(df_imputed)

# Create DBSCAN object
dbscan = DBSCAN(eps=1.5, min_samples=4)

# Perform clustering
clusters = dbscan.fit_predict(df_scaled)

# Add the cluster labels to the DataFrame
df_imputed['Cluster'] = clusters
df_imputed[df_imputed['Cluster'] == -1].sum()

#df_no_outliers = df_imputed[clusters != -1]
# Visualize the clusters
plt.scatter(df_imputed['Temp9am'], df_imputed['Temp3pm'], c=df_imputed['Cluster'], cmap='viridis')
plt.xlabel('Temp9am')
plt.ylabel('Temp3pm')
plt.title('DBSCAN Clustering')
plt.show()
In [33]:
df_imputed = df_imputed[clusters != -1]

plt.scatter(df_imputed['Temp9am'], df_imputed['Temp3pm'], c=df_imputed['Cluster'], cmap='viridis')
plt.xlabel('Temp9am')
plt.ylabel('Temp3pm')
plt.title('DBSCAN Clustering')
plt.show()
In [34]:
df_imputed = df_imputed.drop(columns=['Cluster'], axis=1)

Eseguiamo una nuova stampa per vedere come è cambiata la situazione dopo aver lanciato il DBSCAN per l'eliminazione degli outlier. Come si può, appunto, apprezzare nel grafico sopra, la situazione è decisamente migliorata, con un numero di outlier che è visivamente molto inferiore alla situazione precedente.

In [35]:
num = ['Humidity3pm', 'Humidity9am', 'Temp3pm', 'MinTemp', 'MaxTemp', 'Temp9am']
sb.pairplot(df_imputed[num], kind='scatter', diag_kind='hist', palette='Rainbow')
plt.show()
In [36]:
df_imputed.shape
Out[36]:
(79102, 20)

Come possiamo vedere, dopo aver eseguito il DBSCAN, abbiamo eliminato quasi 70 mila tuple di outliers dal nostro dataset

Correlazione e importanza delle features¶

Plotto la matrice di correlazione servendoci di una heatmap per capire il grado di correlazione tra i vari dati

In [37]:
#creo la matrice di correlazione
matr_corr = df_imputed.corr()

#stampo la matrice di correlazione
plt.figure(figsize=(12,12))
sb.heatmap(matr_corr, annot=True, cmap='coolwarm', fmt=".2f")
plt.show()

Da una veloce analisi della matrice di correlazione possiamo notare come alcuni attributi sono bassamente correlati agli altri. Questo ci permette di dropparli senza problemi così da ridurre il numero di features da tenere in considerazione. Inoltre la nostra analisi si baserà sul mantenere quelle che sono le feature più correlate con il nostro attributo obbiettivo, ovvero 'RainTomorrow'

Andiamo adesso a studiare l'importanza delle feature. Sappiamo di voler fare predizione sull'attributo 'RainTomorrow', andiamo quindi a creare un RandomForest che ci permetterà di capire quali sono gli attributi più importanti a riguardo.

In [38]:
df_X = df_imputed.drop(['RainTomorrow'], axis=1)
df_Y = df_imputed['RainTomorrow']

forest = RandomForestClassifier()
forest.fit(df_X, df_Y)

attributi= df_X.columns
importances= forest.feature_importances_ #grado di importanza fra gli attributi
index= np.argsort(importances) #mappa ogni grado di importanza con l'attributo corrispondente

plt.figure(figsize=(10,10))
plt.title('Grado di importanza fra gli attributi')
plt.barh(range(len(index)),importances[index],color='r',align='center')
plt.yticks(range(len(index)),attributi[index])
plt.show()
In [39]:
df_imputed.shape
Out[39]:
(79102, 20)

In seguit ad una attenta analisi, si decide di droppare le colonne 'day', 'month', 'year', 'Location', 'WindGustDir', 'Temp9am', 'MinTemp', in quanto risultano essere quelle sia meno correlate che meno importanti per quanto riguarda il nostro attributo obbiettivo da dover predirre.

In [40]:
df_imputed = df_imputed.drop(['day', 'month', 'year', 'Location', 'WindGustDir', 'Temp9am', 'MinTemp'], axis=1)

Bilanciamento del dataset¶

Per poter proseguire con i primi classificatori bisogna prima bilanciare il dataset. In particolare l'attributo 'RainTomorrow' sulla quale bisogna fare predizione risulta essere molto sbilanciato da come si può vedere dal grafico sottostante

In [41]:
plt.figure(figsize=(4, 4))
ax = sb.countplot(x=df_imputed['RainTomorrow'])
plt.bar_label(ax.containers[0])
plt.show()

Ciò che verrà fatto è quello di utilizzare il sampling, in particolare verrà fatto oversampling sui dati per bilanciarli

In [42]:
X = df_imputed.drop(['RainTomorrow'], axis=1)
y = df_imputed['RainTomorrow']

#Utilizzo lo smote per fare oversampling e aumentare il numero di istanze 'RainTomorrow' = 'Yes' (1)
smote = SMOTE(random_state=35)
X_smoted, y_smoted = smote.fit_resample(X,y)
df_balanced = pd.concat([X_smoted, y_smoted],axis=1)

plt.figure(figsize=(4, 4))
ax = sb.countplot(x=df_balanced['RainTomorrow'])
plt.bar_label(ax.containers[0])
plt.show()
In [43]:
df_balanced.shape
Out[43]:
(141524, 13)

Come possiamo vedere, il bilanciamento del dataset ha aumentato il numero di tuple da 70 mila a 141 mila. Non abbiamo bisogno di fare campionamento in quanto il risultato risulta avere, ancora, un numero accettabile per fare classificazione.

Salvo il dataframe in un file .csv

In [44]:
df_balanced.to_csv('df_final.csv', index=False)

Metriche per la valutazione dei risultati¶

In [45]:
image = 'confusion_matrix.png'
Image(filename=os.path.join(image),width=300)
Out[45]:

Per introdurre le metriche che verranno utilizzate durante le analisi dobbiamo fare riferimento alla matrice di confusione. Questa è una matrice quadrata in cui le righe sono i valori corretti della classe mentre le colonne sono i valori predetti dal modello per la classe. Un problema di classificazione binaria presenta la seguente matrice di confusione

A questo punto possiamo definire le metriche come segue:

  • Accuracy (accuratezza): L'Accuracy rappresenta la percentuale di previsioni corrette rispetto al totale delle istanze valutate. È una misura di quanto il modello sia in grado di classificare correttamente gli esempi.

  • Error Rate (tasso di errore): L'Error Rate è la misura complementare all'Accuracy e rappresenta la percentuale di previsioni errate rispetto al totale delle istanze valutate. Indica la percentuale di errori compiuti dal modello di classificazione.

  • Precision (precisione): La Precision rappresenta la percentuale di previsioni positive corrette rispetto al totale delle previsioni positive effettuate dal modello. Misura l'accuratezza delle previsioni positive e fornisce un'indicazione sulla proporzione di elementi correttamente identificati come positivi rispetto al numero totale di elementi identificati come positivi.

  • Recall (richiamo): La Recall rappresenta la percentuale di previsioni positive corrette rispetto al totale dei veri positivi e dei falsi negativi. Misura la capacità del modello di individuare correttamente gli elementi positivi, considerando sia i veri positivi (elementi correttamente classificati come positivi) che i falsi negativi (elementi erroneamente classificati come negativi).

  • F-measure (misura F): L'F-measure rappresenta la media armonica tra Precision e Recall. È una misura che tiene conto sia della precisione che del richiamo del modello. L'F-measure fornisce un singolo valore che combina queste due misure, consentendo di valutare complessivamente le prestazioni del modello.

Per calcolare queste metriche, è necessario utilizzare i valori corrispondenti dalla matrice di confusione (TP, TN, FP, FN). Ogni metrica ha una formula specifica per il calcolo basata su questi valori.

Calcolate nel seguente modo.

\begin{align*} \text{Accuracy} &= \frac{TP + TN}{TP + TN + FP + FN} \\ \text{Error Rate} &= \frac{FP + FN}{TP + TN + FP + FN} \\ \text{Precision} &= \frac{TP}{TP + FP} \\ \text{Recall} &= \frac{TP}{TP + FN} \\ \text{F1-Score} &= 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \end{align*}

Vediamo la curva di AUC-ROC

In [46]:
image2 = 'auc-roc.png'
Image(filename=os.path.join(image2),width=350)
Out[46]:
\begin{align*} \text{Sensitivity} &= \text{Recall} = \frac{TP}{TP + FN} \\ \text{Specificity} &= \frac{TN}{TN + FP} \\ \text{False Positive Rate} &= 1 - \text{Specificity} = \frac{FP}{TN + FP} \\ \end{align*}

La curva ROC (Receiver Operating Characteristic) è un grafico utilizzato per valutare le prestazioni di un modello di classificazione binaria. Mostra come variano il tasso di veri positivi (TPR) e il tasso di falsi positivi (FPR) al variare della soglia di classificazione del modello.

Nella curva ROC, l'asse verticale rappresenta la capacità del modello di individuare correttamente gli esempi positivi (TPR). Più il TPR è alto, migliore è il modello nell'identificare correttamente gli esempi positivi.

L'asse orizzontale rappresenta la proporzione di esempi negativi erroneamente classificati come positivi (FPR). Più il FPR è basso, migliore è il modello nel limitare gli errori di classificazione degli esempi negativi.

La curva ROC viene generata variando la soglia di classificazione del modello. Per ogni soglia, vengono calcolati il TPR e il FPR corrispondenti. Ogni punto sulla curva rappresenta una combinazione di TPR e FPR a una specifica soglia.

Definisco a questo punto dei dataframe utili per il salvataggio dei dati

In [47]:
# data frame per risultati dei modelli per la cross-validation
results= pd.DataFrame(columns=['NomeModello','Accuracy','ErrorRate','Precision','Recall','F-score','Time'])
In [48]:
def valuta_performance(nomeModello, modello, X, y, data,temp):
    indici = data[data['NomeModello'] == nomeModello].index
    for j in range(len(indici)):
        data.drop(indici[j], inplace = True)
    accuracy = cross_val_score(modello, X ,y, cv=10 ,scoring="accuracy").mean()
    error = (1-accuracy)
    precision = cross_val_score(modello, X ,y, cv=10 ,scoring="average_precision").mean()
    recall = cross_val_score(modello, X ,y, cv=10 ,scoring="recall_micro").mean()
    f_score = cross_val_score(modello, X ,y, cv=10 ,scoring="f1_micro").mean()

    row = pd.DataFrame({'NomeModello': [nomeModello],
                        'Accuracy': [accuracy],
                        'ErrorRate': [error],
                        'Precision': [precision],
                        'Recall': [recall],
                        'F-score': [f_score],
                        'Time': [temp]})

    data = pd.concat([data, row], ignore_index=True)

    return data
In [182]:
# dataframe utili per il plotting delle performance dei classificatori
models=["Albero_Decisionale", "Albero_Decisionale_CV", "Naive_Bayes","Naive_Bayes_CV", "LogReg","LogReg_CV", "KNN","KNN_CV","SGD", "SGD_CV", "Voting", "Random_Forest","AdaBoost",'XGBoost']
results_test= pd.DataFrame(index=models,columns=["accuracy","balanced_accuracy","precision","w_precision","recall","w_recall","f1"])
results_train= pd.DataFrame(index=models,columns=["accuracy","balanced_accuracy","precision","w_precision","recall","w_recall","f1"])
In [50]:
# dataframe specifico per le ANN
models_neural=["ANN","MLP"]
results_test_neural= pd.DataFrame(index=models_neural,columns=["accuracy","balanced_accuracy","precision","w_precision","recall","w_recall","f1"])
In [51]:
# funzione per salvare gli score dei relativi modelli
def set_scores(df,model,label,predicted):
    df.loc[model]["f1"]=f1_score(label, predicted)
    df.loc[model]["accuracy"]=accuracy_score(label, predicted)
    df.loc[model]["recall"]=recall_score(label, predicted)
    df.loc[model]["precision"]=precision_score(label, predicted)
    df.loc[model]["balanced_accuracy"]=balanced_accuracy_score(label, predicted)
    df.loc[model]["w_recall"]=recall_score(label, predicted,average='weighted')
    df.loc[model]["w_precision"]=precision_score(label, predicted,average='weighted')
In [52]:
# Lista degli score per la ROC
lista_predizioni = []

Classificatori¶

In [53]:
df_final = pd.read_csv('df_final.csv')

Finito il lavoro di pulizia, bilanciamento ed esplorazione del dataset si può passare alla fase di training dei classificatori. Dividiamo il dataset in training_set e test_set

  • training_set utilizzato per costruire il modello che sarà formato dai 2/3 ddel dataset
  • test_set, utilizzato per testare le performance del modello e sarà composto dalla restante parte del dataset
In [54]:
#Divido il dataset in test_set e training_set
df_X = df_final.drop('RainTomorrow', axis=1)
df_y = df_final['RainTomorrow']

X_train,X_test,y_train,y_test = train_test_split(df_X, df_y, test_size=.33, random_state=42)
y_train = y_train.values.ravel()

La politica che verrà adottata di seguito sarà quella di utilizzare vari classificatori e, per i migliori di essi, verrà provato a usare il GridSearchCV per provare a migliorare eventuali risultati ottenuti.

Albero Decisionale¶

Un albero decisionale (o decision tree) è una struttura di dati ad albero utilizzata per prendere decisioni o fare previsioni in base a una serie di condizioni o attributi. Può essere utilizzato per classificare esempi in diverse categorie o per stimare valori numerici (regressione). Gli alberi decisionali possono anche essere utilizzati per la selezione delle caratteristiche, l'individuazione di outlier e altre attività di analisi dei dati.

Risultati sul training set¶

Per quanto riguarda l'albero decisionale, con una prima prova esso andava in overfitting sul train set. Si è deciso di intervenire su di esso facendo pre-pruning, ovvero è stata fissata una altezza massima dell'albero pari a 15 così da evitare il problema indesiderato.

In [55]:
start = time.time()
albero = DecisionTreeClassifier(random_state=42, max_depth=15)
albero = albero.fit(X_train, y_train)
stop = time.time()-start
In [56]:
result = valuta_performance('Albero Decisionale', albero, X_train, y_train, results, stop)
In [57]:
y_pred_train_albero = albero.predict(X_train)
confusion_matrix = metrics.confusion_matrix(y_train,y_pred_train_albero)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_train, y_pred_train_albero))
              precision    recall  f1-score   support

           0       0.99      0.98      0.98     47408
           1       0.98      0.99      0.98     47413

    accuracy                           0.98     94821
   macro avg       0.98      0.98      0.98     94821
weighted avg       0.98      0.98      0.98     94821

In [58]:
set_scores(results_train, "Albero_Decisionale", y_train, y_pred_train_albero)

Risultati sul test set¶

In [59]:
y_pred_test_albero=albero.predict(X_test)
lista_predizioni.append(y_pred_test_albero)
In [60]:
confusion_matrix = metrics.confusion_matrix(y_test,y_pred_test_albero)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_test, y_pred_test_albero))
              precision    recall  f1-score   support

           0       0.95      0.94      0.95     23354
           1       0.94      0.95      0.95     23349

    accuracy                           0.95     46703
   macro avg       0.95      0.95      0.95     46703
weighted avg       0.95      0.95      0.95     46703

In [61]:
set_scores(results_test, "Albero_Decisionale", y_test, y_pred_test_albero)
In [62]:
print("max_features: ", albero.max_features_)
print("max_depth: ", albero.tree_.max_depth)
print("min_samples_split: ", albero.min_samples_split)
print("min_samples_leaf: ", albero.min_samples_leaf)
max_features:  12
max_depth:  15
min_samples_split:  2
min_samples_leaf:  1

Albero decisionale (GridSearchCV)¶

In [63]:
albero_grid_search = DecisionTreeClassifier()

grid = {'max_features': ['sqrt', 'log2'],
        'max_depth': [12, 14, 16],
        'min_samples_split': [2, 4, 5],
        'min_samples_leaf': [2, 3, 5],
        'criterion':['gini','entropy'],
        'splitter':['best','random'],
       }

grid_search = GridSearchCV(
                estimator=albero_grid_search,
                param_grid=grid,
                cv=5,
                scoring="recall", #ci interessa la recall rispetto alla precision
                n_jobs=-1)

grid_search_result = grid_search.fit(X_train, y_train)

albero_grid_search = grid_search.best_estimator_
print(albero_grid_search)
DecisionTreeClassifier(max_depth=16, max_features='log2', min_samples_leaf=3,
                       min_samples_split=5)

Risultati Train Set¶

In [64]:
y_pred_train_albero_grid_search = albero_grid_search.predict(X_train)
In [65]:
confusion_matrix = metrics.confusion_matrix(y_train,y_pred_train_albero_grid_search)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_train, y_pred_train_albero_grid_search))
              precision    recall  f1-score   support

           0       0.97      0.96      0.97     47408
           1       0.96      0.97      0.97     47413

    accuracy                           0.97     94821
   macro avg       0.97      0.97      0.97     94821
weighted avg       0.97      0.97      0.97     94821

In [66]:
set_scores(results_train, "Albero_Decisionale_CV", y_train, y_pred_train_albero_grid_search)

Risultati Test set¶

In [67]:
y_pred_test_albero_grid_search = albero_grid_search.predict(X_test)
In [68]:
confusion_matrix = metrics.confusion_matrix(y_test,y_pred_test_albero_grid_search)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_test, y_pred_test_albero_grid_search))
              precision    recall  f1-score   support

           0       0.94      0.93      0.93     23354
           1       0.93      0.94      0.93     23349

    accuracy                           0.93     46703
   macro avg       0.93      0.93      0.93     46703
weighted avg       0.93      0.93      0.93     46703

In [69]:
set_scores(results_test, "Albero_Decisionale_CV", y_train, y_pred_train_albero_grid_search)

Naive Bayes¶

Il classificatore di Bayes, o classificatore bayesiano, è un algoritmo di apprendimento automatico che si basa sul teorema di Bayes per effettuare la classificazione di oggetti in diverse categorie. Questo classificatore assume che la presenza di una caratteristica specifica in un oggetto sia indipendente dalle altre caratteristiche dell'oggetto. Il classificatore di Bayes calcola le probabilità per ogni classe e assegna l'oggetto alla classe con la probabilità più alta. Ciò richiede la stima delle probabilità a priori delle classi e la stima delle probabilità condizionate delle caratteristiche date le classi.

Il classificatore di Bayes può essere un potente strumento di classificazione, ma la sua accuratezza dipende dall'assunzione di indipendenza delle caratteristiche e dalla correttezza delle probabilità a priori stimate.

In [70]:
start = time.time()
nb = GaussianNB()
nb = nb.fit(X_train, y_train)
stop = time.time() - start
In [71]:
results = valuta_performance("Naive Bayes Gaussian", nb, X_train, y_train, results, stop)

Risultati Train set¶

In [72]:
y_pred_train_bayes = nb.predict(X_train)
In [73]:
confusion_matrix = metrics.confusion_matrix(y_train,y_pred_train_bayes)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_train, y_pred_train_bayes))
              precision    recall  f1-score   support

           0       0.83      0.86      0.84     47408
           1       0.85      0.82      0.84     47413

    accuracy                           0.84     94821
   macro avg       0.84      0.84      0.84     94821
weighted avg       0.84      0.84      0.84     94821

In [74]:
set_scores(results_train,"Naive_Bayes", y_train, y_pred_train_bayes)

Risultati Test set¶

In [75]:
y_pred_test_bayes = nb.predict(X_test)
lista_predizioni.append(y_pred_test_bayes)
In [76]:
confusion_matrix = metrics.confusion_matrix(y_test,y_pred_test_bayes)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_test, y_pred_test_bayes))
              precision    recall  f1-score   support

           0       0.83      0.85      0.84     23354
           1       0.85      0.83      0.84     23349

    accuracy                           0.84     46703
   macro avg       0.84      0.84      0.84     46703
weighted avg       0.84      0.84      0.84     46703

In [77]:
set_scores(results_test, "Naive_Bayes", y_test, y_pred_test_bayes)
print("var_smoothing: " + str(nb.var_smoothing))
var_smoothing: 1e-09

Naive Bayes (SearchGridCV)¶

In [78]:
naive_bayes_search_grid = GaussianNB()

grid_param = {'var_smoothing': [1e-010, 1e-09, 1e-08]}

grid_search = GridSearchCV(
    estimator=nb,
    param_grid=grid_param,
    cv=5,
    scoring="recall",
    n_jobs=-1
)

grid_search.fit(X_train,y_train)

naive_bayes_search_grid=grid_search.best_estimator_
print(grid_search.best_estimator_)
GaussianNB(var_smoothing=1e-10)

Risutati Train set¶

In [79]:
y_train_naive_bayes_grid_search=naive_bayes_search_grid.predict(X_train)
confusion_matrix = metrics.confusion_matrix(y_train,y_train_naive_bayes_grid_search)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_train, y_train_naive_bayes_grid_search))
              precision    recall  f1-score   support

           0       0.83      0.86      0.84     47408
           1       0.85      0.82      0.84     47413

    accuracy                           0.84     94821
   macro avg       0.84      0.84      0.84     94821
weighted avg       0.84      0.84      0.84     94821

In [80]:
set_scores(results_train,"Naive_Bayes_CV",y_train,y_train_naive_bayes_grid_search)

Risultati Test set¶

In [81]:
y_test_naive_bayes_grid_search=naive_bayes_search_grid.predict(X_test)
lista_predizioni.append(y_test_naive_bayes_grid_search)
In [82]:
confusion_matrix = metrics.confusion_matrix(y_test,y_test_naive_bayes_grid_search)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_test, y_test_naive_bayes_grid_search))
              precision    recall  f1-score   support

           0       0.83      0.85      0.84     23354
           1       0.85      0.83      0.84     23349

    accuracy                           0.84     46703
   macro avg       0.84      0.84      0.84     46703
weighted avg       0.84      0.84      0.84     46703

In [83]:
set_scores(results_test,"Naive_Bayes_CV",y_test,y_test_naive_bayes_grid_search)

Logistic Regression¶

La regressione logistica è un algoritmo di apprendimento automatico utilizzato per affrontare problemi di classificazione binaria o multi-classe. A differenza della regressione lineare, che si occupa di predire un valore numerico continuo, la regressione logistica predice la probabilità che un'istanza appartenga a una determinata classe.

La regressione logistica si basa sulla funzione logistica, anche nota come funzione sigmoide, che mappa un valore reale in un intervallo compreso tra 0 e 1. La funzione sigmoide è definita come:

Risultati sul Train set¶

In [84]:
start= time.time()
logreg = LogisticRegression(random_state=42,max_iter=500) #aumento iterazioni per la convergenza dell'algoritmo
logreg = logreg.fit(X_train, y_train)
stop= time.time()-start
In [85]:
results= valuta_performance('Logistic Regression', logreg, X_train, y_train, results, stop)
In [86]:
y_pred_train_logreg = logreg.predict(X_train)
confusion_matrix = metrics.confusion_matrix(y_train,y_pred_train_logreg)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_train, y_pred_train_logreg))
              precision    recall  f1-score   support

           0       0.87      0.85      0.86     47408
           1       0.86      0.88      0.87     47413

    accuracy                           0.86     94821
   macro avg       0.86      0.86      0.86     94821
weighted avg       0.86      0.86      0.86     94821

In [87]:
set_scores(results_train, 'LogReg', y_train, y_pred_train_logreg)

Risultati Test set¶

In [88]:
y_pred_test_logreg = logreg.predict(X_test)
lista_predizioni.append(y_pred_test_logreg)
In [89]:
confusion_matrix = metrics.confusion_matrix(y_test,y_pred_test_logreg)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_test, y_pred_test_logreg))
              precision    recall  f1-score   support

           0       0.87      0.85      0.86     23354
           1       0.85      0.88      0.86     23349

    accuracy                           0.86     46703
   macro avg       0.86      0.86      0.86     46703
weighted avg       0.86      0.86      0.86     46703

In [90]:
set_scores(results_test, "LogReg", y_test, y_pred_test_logreg)

Logistic Regression (GridSearchCV)¶

In [91]:
logistic_regression_grid_search = LogisticRegression(random_state=42, max_iter=500)

param_grid = {
    'C': [10,1.0,0.1],
    'solver': ['newton-cg'],
    'penalty': ['l2']
}

grid_search = GridSearchCV(
    estimator=logistic_regression_grid_search,
    param_grid=param_grid,
    n_jobs=-1,
    cv=5,
    scoring="recall"
)

grid_final = grid_search.fit(X_train,y_train)

logistic_regression_grid_search = grid_search.best_estimator_
print(grid_search.best_estimator_)
LogisticRegression(C=10, max_iter=500, random_state=42, solver='newton-cg')

Risultati Train set¶

In [92]:
y_pred_train_logistic_regression = logistic_regression_grid_search.predict(X_train)
In [93]:
confusion_matrix = metrics.confusion_matrix(y_train,y_pred_train_logistic_regression)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_train, y_pred_train_logistic_regression))
              precision    recall  f1-score   support

           0       0.88      0.86      0.87     47408
           1       0.86      0.88      0.87     47413

    accuracy                           0.87     94821
   macro avg       0.87      0.87      0.87     94821
weighted avg       0.87      0.87      0.87     94821

In [94]:
set_scores(results_train, "LogReg_CV", y_train, y_pred_train_logistic_regression)

Risultati Test set¶

In [95]:
y_pred_test_logistic_regression = logistic_regression_grid_search.predict(X_test)
In [96]:
lista_predizioni.append(y_pred_test_logistic_regression)
In [97]:
confusion_matrix = metrics.confusion_matrix(y_test,y_pred_test_logistic_regression)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_test, y_pred_test_logistic_regression))
              precision    recall  f1-score   support

           0       0.88      0.86      0.87     23354
           1       0.86      0.88      0.87     23349

    accuracy                           0.87     46703
   macro avg       0.87      0.87      0.87     46703
weighted avg       0.87      0.87      0.87     46703

In [98]:
set_scores(results_test,"LogReg_CV",y_test,y_pred_test_logistic_regression)

K Nearest Neighbors¶

K-nearest neighbors (K-NN) è un algoritmo di apprendimento automatico utilizzato per la classificazione e la regressione. L'idea principale di K-NN è che gli oggetti che sono "vicini" l'uno all'altro nello spazio delle caratteristiche condividono spesso la stessa etichetta di classe o valore di output.

In [99]:
start= time.time()
Knn = KNeighborsClassifier(n_neighbors=5)
Knn = Knn.fit(X_train,y_train)
stop= time.time()-start
In [100]:
import warnings
warnings.filterwarnings('ignore')
results= valuta_performance('KNN classifier', Knn, X_train, y_train, results,stop)

Uso K-nearest Neighbors con n_neighbors = 5, che è il lvalore utilizzato di default

Risultati Train set¶

In [101]:
y_pred_train_Knn = Knn.predict(X_train)
lista_predizioni.append(y_pred_train_Knn)
confusion_matrix = metrics.confusion_matrix(y_train,y_pred_train_Knn)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_train, y_pred_train_Knn))
              precision    recall  f1-score   support

           0       1.00      0.89      0.94     47408
           1       0.90      1.00      0.95     47413

    accuracy                           0.94     94821
   macro avg       0.95      0.94      0.94     94821
weighted avg       0.95      0.94      0.94     94821

In [102]:
set_scores(results_train, "KNN", y_train, y_pred_train_Knn)

Risultati Test Set¶

In [103]:
y_pred_test_Knn = Knn.predict(X_test)
In [104]:
#lista_predizioni.append(y_pred_test_Knn)
In [105]:
confusion_matrix = metrics.confusion_matrix(y_test,y_pred_test_Knn)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_test, y_pred_test_Knn))
              precision    recall  f1-score   support

           0       0.99      0.85      0.92     23354
           1       0.87      0.99      0.93     23349

    accuracy                           0.92     46703
   macro avg       0.93      0.92      0.92     46703
weighted avg       0.93      0.92      0.92     46703

In [106]:
set_scores(results_test, "KNN", y_test, y_pred_test_Knn)
print("n_neighbors: " + str(Knn.n_neighbors))
print("leaf_size: " + str(Knn.leaf_size))
n_neighbors: 5
leaf_size: 30

KNN (GridSearchCV)¶

In [107]:
knn_grid_search = KNeighborsClassifier()

param_grid = {
    'n_neighbors': [2,3,4,5],
    'leaf_size': [1,2,3,4]
}

grid_search = GridSearchCV(
                estimator=knn_grid_search,
                param_grid=param_grid,
                cv=5,
                scoring="recall",
                n_jobs=-1
)

grid_search_result = grid_search.fit(X_train, y_train)

knn_grid_search = grid_search.fit(X_train, y_train)
print(grid_search.best_estimator_)
KNeighborsClassifier(leaf_size=1, n_neighbors=3)

Risultati Train Set¶

In [108]:
y_pred_train_knn_grid_search = knn_grid_search.predict(X_train)
In [109]:
confusion_matrix = metrics.confusion_matrix(y_train,y_pred_train_knn_grid_search)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_train, y_pred_train_knn_grid_search))
              precision    recall  f1-score   support

           0       1.00      0.92      0.96     47408
           1       0.93      1.00      0.96     47413

    accuracy                           0.96     94821
   macro avg       0.96      0.96      0.96     94821
weighted avg       0.96      0.96      0.96     94821

In [110]:
set_scores(results_train, "KNN_CV", y_train,y_pred_train_knn_grid_search)

Risultati Test set¶

In [111]:
y_pred_test_knn_grid_search = knn_grid_search.predict(X_test)
In [112]:
lista_predizioni.append(y_pred_test_knn_grid_search)
In [113]:
confusion_matrix = metrics.confusion_matrix(y_test,y_pred_test_knn_grid_search)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_test, y_pred_test_knn_grid_search))
              precision    recall  f1-score   support

           0       1.00      0.87      0.93     23354
           1       0.88      1.00      0.94     23349

    accuracy                           0.93     46703
   macro avg       0.94      0.93      0.93     46703
weighted avg       0.94      0.93      0.93     46703

In [114]:
set_scores(results_test, "KNN_CV", y_test, y_pred_test_knn_grid_search)

SGD¶

La discesa stocastica del gradiente (Stochastic Gradient Descent, SGD) è un algoritmo di ottimizzazione. L'obiettivo principale della SGD è minimizzare una funzione di perdita che misura l'errore tra le previsioni del modello e i valori di output desiderati. L'algoritmo fa ciò aggiornando iterativamente i parametri del modello in direzione opposta al gradiente della funzione di perdita rispetto a tali parametri. L'idea alla base della SGD è quella di suddividere il set di dati di addestramento in piccoli sottoinsiemi chiamati batch e di calcolare il gradiente della funzione di perdita sui singoli batch.

In [115]:
start = time.time()
SGD = SGDClassifier(max_iter=5000, random_state=42)
SGD = SGD.fit(X_train, y_train)
stop = time.time() - start
In [116]:
results = valuta_performance('SGD', SGD, X_train, y_train, results, stop)

Risutlati Train set¶

In [117]:
y_pred_train_SGD = SGD.predict(X_train)
In [118]:
confusion_matrix = metrics.confusion_matrix(y_train,y_pred_train_SGD)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_train, y_pred_train_SGD))
              precision    recall  f1-score   support

           0       0.97      0.67      0.79     47408
           1       0.75      0.98      0.85     47413

    accuracy                           0.83     94821
   macro avg       0.86      0.83      0.82     94821
weighted avg       0.86      0.83      0.82     94821

In [119]:
set_scores(results_train, "SGD", y_train, y_pred_train_SGD)

Risultati Test set¶

In [120]:
y_pred_test_SGD = SGD.predict(X_test)
In [121]:
lista_predizioni.append(y_pred_test_SGD)
In [122]:
confusion_matrix = metrics.confusion_matrix(y_test,y_pred_test_SGD)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_test, y_pred_test_SGD))
              precision    recall  f1-score   support

           0       0.97      0.67      0.79     23354
           1       0.75      0.98      0.85     23349

    accuracy                           0.82     46703
   macro avg       0.86      0.82      0.82     46703
weighted avg       0.86      0.82      0.82     46703

In [123]:
set_scores(results_test, "SGD", y_test, y_pred_test_SGD)
print("alpha: " + str(SGD.alpha))
alpha: 0.0001

SGD (GridSearchCV)¶

In [124]:
sgd_grid_search = SGDClassifier()

grid = {
    'loss': ['hinge', 'log_loss', 'squared_hinge', 'modified_huber', "perceptron"],
    'alpha': [0.00001, 0.0001, 0.001, 0.01],
    'penalty': ['l1', 'l2', 'elasticnet']
}
grid_search = GridSearchCV(
                estimator=sgd_grid_search,
                param_grid=grid,
                cv=5,
                scoring="recall",
                n_jobs=-1
)
grid_result = grid_search.fit(X_train, y_train)

sgd_grid_search = grid_search.best_estimator_
print(grid_search.best_estimator_)
SGDClassifier(loss='perceptron', penalty='elasticnet')

Risutlati Training set¶

In [125]:
y_pred_train_SGD_grid_search = sgd_grid_search.predict(X_train)
confusion_matrix = metrics.confusion_matrix(y_train,y_pred_train_SGD_grid_search)
cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_train, y_pred_train_SGD_grid_search))
              precision    recall  f1-score   support

           0       0.96      0.71      0.82     47408
           1       0.77      0.97      0.86     47413

    accuracy                           0.84     94821
   macro avg       0.86      0.84      0.84     94821
weighted avg       0.86      0.84      0.84     94821

In [126]:
set_scores(results_train,"SGD_CV",y_train,y_pred_train_SGD_grid_search)

Risultati Test set¶

In [127]:
y_pred_test_SGD_grid_search = sgd_grid_search.predict(X_test)
lista_predizioni.append(y_pred_test_SGD_grid_search)
confusion_matrix = metrics.confusion_matrix(y_test,y_pred_test_SGD_grid_search)
cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_test, y_pred_test_SGD_grid_search))
              precision    recall  f1-score   support

           0       0.96      0.71      0.81     23354
           1       0.77      0.97      0.86     23349

    accuracy                           0.84     46703
   macro avg       0.86      0.84      0.83     46703
weighted avg       0.86      0.84      0.83     46703

In [128]:
set_scores(results_test,"SGD_CV",y_test,y_pred_test_SGD_grid_search)

Voting¶

E' un modo di aggregare i classificatori, che può essere fatto in modo HARD o SOFT

In [129]:
voting = VotingClassifier(estimators=[('k-nearest neighbors', Knn),('regressione logistica', logreg),('classficatore bayesiano', nb)], voting ='hard')
voting.fit(X_train,y_train)
Out[129]:
VotingClassifier(estimators=[('k-nearest neighbors', KNeighborsClassifier()),
                             ('regressione logistica',
                              LogisticRegression(max_iter=500,
                                                 random_state=42)),
                             ('classficatore bayesiano', GaussianNB())])
In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook.
On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.
VotingClassifier(estimators=[('k-nearest neighbors', KNeighborsClassifier()),
                             ('regressione logistica',
                              LogisticRegression(max_iter=500,
                                                 random_state=42)),
                             ('classficatore bayesiano', GaussianNB())])
KNeighborsClassifier()
LogisticRegression(max_iter=500, random_state=42)
GaussianNB()

Risultati Train Set¶

In [130]:
y_train_voting=voting.predict(X_train)
In [131]:
confusion_matrix = metrics.confusion_matrix(y_train, y_train_voting)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_train, y_train_voting))
              precision    recall  f1-score   support

           0       0.90      0.88      0.89     47408
           1       0.88      0.90      0.89     47413

    accuracy                           0.89     94821
   macro avg       0.89      0.89      0.89     94821
weighted avg       0.89      0.89      0.89     94821

In [132]:
set_scores(results_train,"Voting", y_train, y_train_voting)

Risultati Test Set¶

In [133]:
y_test_voting=voting.predict(X_test)
In [134]:
lista_predizioni.append(y_test_voting)
In [135]:
confusion_matrix = metrics.confusion_matrix(y_test, y_test_voting)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_test, y_test_voting))
              precision    recall  f1-score   support

           0       0.90      0.87      0.88     23354
           1       0.87      0.90      0.89     23349

    accuracy                           0.88     46703
   macro avg       0.88      0.88      0.88     46703
weighted avg       0.88      0.88      0.88     46703

In [136]:
set_scores(results_test,"Voting",y_test,y_test_voting)

Classificatori Ensemble¶

Random Forest¶

Con il random foresst è stata adottata la stessa politica usata con l'albero di decisione. Esso overfittava sul train set, si è deciso, quindi, di attuare pre-pruning fissando l'altezza massima del grafo a 13

In [137]:
forest = RandomForestClassifier(random_state=42, max_depth=13)
forest = forest.fit(X_train,y_train)

Risultati Train set¶

In [138]:
y_pred_train_forest = forest.predict(X_train)
lista_predizioni.append(y_pred_train_forest)
confusion_matrix = metrics.confusion_matrix(y_train,y_pred_train_forest)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_train, y_pred_train_forest))
              precision    recall  f1-score   support

           0       0.99      0.97      0.98     47408
           1       0.97      0.99      0.98     47413

    accuracy                           0.98     94821
   macro avg       0.98      0.98      0.98     94821
weighted avg       0.98      0.98      0.98     94821

In [139]:
set_scores(results_train, "Random_Forest", y_train, y_pred_train_forest)

Risultati Test set¶

In [140]:
y_pred_test_forest = forest.predict(X_test)
lista_predizioni.append(y_pred_test_forest)
confusion_matrix = metrics.confusion_matrix(y_test,y_pred_test_forest)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_test, y_pred_test_forest))
              precision    recall  f1-score   support

           0       0.97      0.95      0.96     23354
           1       0.95      0.97      0.96     23349

    accuracy                           0.96     46703
   macro avg       0.96      0.96      0.96     46703
weighted avg       0.96      0.96      0.96     46703

In [141]:
set_scores(results_test, "Random_Forest", y_test, y_pred_test_forest)
print("n_estimators: " + str(forest.n_estimators))
print("min_samples_split: " + str(forest.min_samples_split))
print("min_samples_leaf: " + str(forest.min_samples_leaf))
n_estimators: 100
min_samples_split: 2
min_samples_leaf: 1

Boosting¶

Il boosting è una tecnica di machine learning nell'ambito del data mining che combina diversi "weak learner" (tipicamente alberi decisionali) per creare un modello ensemble robusto. È una forma di apprendimento ensemble in cui i modelli vengono addestrati in sequenza, con ciascun modello successivo che si concentra sugli esempi classificati erroneamente dai modelli precedenti. L'obiettivo del boosting è migliorare le prestazioni predittive complessive adattando iterativamente i pesi o focalizzandosi su istanze difficili da predire. Verranno usati in particolare due algoritmi di boosting:

  • Adaboost: Regola i pesi delle istanze in base all'errore di classificazione di ciascun "weak learner". Assegna pesi più alti alle istanze classificate erroneamente e pesi più bassi a quelle classificate correttamente.
  • XGBoost: Algoritmo di machine learning che combina diversi modelli deboli per creare un modello predittivo più forte, focalizzandosi sugli errori commessi dai modelli precedenti e utilizzando un'elaborazione parallela per una maggiore efficienza.

Adaboost¶

In [142]:
adaboost = AdaBoostClassifier(random_state=42)
adaboost.fit(X_train, y_train)
Out[142]:
AdaBoostClassifier(random_state=42)
In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook.
On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.
AdaBoostClassifier(random_state=42)

Risultati Train set¶

In [143]:
y_train_AdaBoost = adaboost.predict(X_train)
In [144]:
confusion_matrix = metrics.confusion_matrix(y_train, y_train_AdaBoost)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_train, y_train_AdaBoost))
              precision    recall  f1-score   support

           0       0.92      0.90      0.91     47408
           1       0.90      0.92      0.91     47413

    accuracy                           0.91     94821
   macro avg       0.91      0.91      0.91     94821
weighted avg       0.91      0.91      0.91     94821

In [145]:
set_scores(results_train,"AdaBoost",y_train,y_train_AdaBoost)

Risultati Test set¶

In [146]:
y_test_AdaBoost = adaboost.predict(X_test)
In [147]:
lista_predizioni.append(y_test_AdaBoost)
In [148]:
confusion_matrix = metrics.confusion_matrix(y_test, y_test_AdaBoost)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

print(classification_report(y_test, y_test_AdaBoost))
              precision    recall  f1-score   support

           0       0.92      0.90      0.91     23354
           1       0.90      0.92      0.91     23349

    accuracy                           0.91     46703
   macro avg       0.91      0.91      0.91     46703
weighted avg       0.91      0.91      0.91     46703

In [149]:
set_scores(results_test,"AdaBoost",y_test,y_test_AdaBoost)

XGBoost¶

In [150]:
xgb = XGBClassifier(random_state=42, nthread=8)
xgb.fit(X_train,y_train)
Out[150]:
XGBClassifier(base_score=None, booster=None, callbacks=None,
              colsample_bylevel=None, colsample_bynode=None,
              colsample_bytree=None, early_stopping_rounds=None,
              enable_categorical=False, eval_metric=None, feature_types=None,
              gamma=None, gpu_id=None, grow_policy=None, importance_type=None,
              interaction_constraints=None, learning_rate=None, max_bin=None,
              max_cat_threshold=None, max_cat_to_onehot=None,
              max_delta_step=None, max_depth=None, max_leaves=None,
              min_child_weight=None, missing=nan, monotone_constraints=None,
              n_estimators=100, n_jobs=None, nthread=8, num_parallel_tree=None,
              predictor=None, ...)
In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook.
On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.
XGBClassifier(base_score=None, booster=None, callbacks=None,
              colsample_bylevel=None, colsample_bynode=None,
              colsample_bytree=None, early_stopping_rounds=None,
              enable_categorical=False, eval_metric=None, feature_types=None,
              gamma=None, gpu_id=None, grow_policy=None, importance_type=None,
              interaction_constraints=None, learning_rate=None, max_bin=None,
              max_cat_threshold=None, max_cat_to_onehot=None,
              max_delta_step=None, max_depth=None, max_leaves=None,
              min_child_weight=None, missing=nan, monotone_constraints=None,
              n_estimators=100, n_jobs=None, nthread=8, num_parallel_tree=None,
              predictor=None, ...)

Risultati Train set¶

In [151]:
y_train_xgb = xgb.predict(X_train)
In [152]:
confusion_matrix = metrics.confusion_matrix(y_train, y_train_xgb)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix=confusion_matrix, display_labels=[0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')

plt.show()

print(classification_report(y_train, y_train_xgb))
              precision    recall  f1-score   support

           0       0.98      0.99      0.99     47408
           1       0.99      0.98      0.99     47413

    accuracy                           0.99     94821
   macro avg       0.99      0.99      0.99     94821
weighted avg       0.99      0.99      0.99     94821

In [153]:
set_scores(results_train,"XGBoost",y_train,y_train_xgb)

Risultati Test set¶

In [154]:
y_test_xgb = xgb.predict(X_test)
In [155]:
lista_predizioni.append(y_test_xgb)
In [156]:
confusion_matrix = metrics.confusion_matrix(y_test, y_test_xgb)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')

plt.show()

print(classification_report(y_test, y_test_xgb))
              precision    recall  f1-score   support

           0       0.97      0.98      0.97     23354
           1       0.98      0.97      0.97     23349

    accuracy                           0.97     46703
   macro avg       0.97      0.97      0.97     46703
weighted avg       0.97      0.97      0.97     46703

In [157]:
set_scores(results_test,"XGBoost",y_test,y_test_xgb)

Visualizzazione dei risultati¶

Viene di seguito mostrata la classifica dei migliori classificatori per ciascuna metrica.

In [158]:
results_train
Out[158]:
accuracy balanced_accuracy precision w_precision recall w_recall f1
Albero_Decisionale 0.983348 0.983347 0.980279 0.983367 0.986544 0.983348 0.983401
Albero_Decisionale_CV 0.967708 0.967707 0.964088 0.967736 0.971611 0.967708 0.967835
Naive_Bayes 0.841438 0.841439 0.85423 0.841884 0.823403 0.841438 0.838533
Naive_Bayes_CV 0.841438 0.841439 0.85423 0.841884 0.823403 0.841438 0.838533
LogReg 0.863501 0.8635 0.855009 0.863709 0.875477 0.863501 0.865122
LogReg_CV 0.870514 0.870513 0.863408 0.870656 0.880307 0.870514 0.871776
KNN 0.942987 0.942984 0.899253 0.948368 0.997764 0.942987 0.945951
KNN_CV 0.960325 0.960323 0.927289 0.963094 0.998988 0.960325 0.961804
SGD 0.82526 0.825252 0.749684 0.858076 0.976631 0.82526 0.84824
SGD_CV 0.840057 0.84005 0.770724 0.863933 0.968131 0.840057 0.858222
Voting 0.889349 0.889349 0.880198 0.889575 0.901398 0.889349 0.890672
Random_Forest 0.978792 0.978791 0.968255 0.979034 0.990045 0.978792 0.979029
AdaBoost 0.90905 0.909049 0.899862 0.909266 0.920549 0.90905 0.910088
XGBoost 0.986986 0.986986 0.990567 0.987012 0.983338 0.986986 0.986939
In [159]:
results_test
Out[159]:
accuracy balanced_accuracy precision w_precision recall w_recall f1
Albero_Decisionale 0.947412 0.947413 0.943833 0.947441 0.951433 0.947412 0.947618
Albero_Decisionale_CV 0.967708 0.967707 0.964088 0.967736 0.971611 0.967708 0.967835
Naive_Bayes 0.839689 0.839688 0.849414 0.839953 0.825731 0.839689 0.837405
Naive_Bayes_CV 0.839689 0.839688 0.849414 0.839953 0.825731 0.839689 0.837405
LogReg 0.862685 0.862687 0.853422 0.862934 0.875755 0.862685 0.864444
LogReg_CV 0.87033 0.870331 0.863039 0.870479 0.880337 0.87033 0.871602
KNN 0.922018 0.922026 0.868589 0.931072 0.994475 0.922018 0.927279
KNN_CV 0.932895 0.932902 0.883892 0.940062 0.996702 0.932895 0.936914
SGD 0.822624 0.82264 0.746668 0.856397 0.97653 0.822624 0.846268
SGD_CV 0.836563 0.836577 0.766626 0.861415 0.967665 0.836563 0.855493
Voting 0.883733 0.883735 0.870457 0.884226 0.901623 0.883733 0.885766
Random_Forest 0.959146 0.959147 0.948613 0.959399 0.970877 0.959146 0.959616
AdaBoost 0.907115 0.907116 0.899139 0.907278 0.917084 0.907115 0.908023
XGBoost 0.973877 0.973877 0.978175 0.973916 0.969378 0.973877 0.973757

Mostriamo, inoltre, una stampa grafica dei risultati ottenuti dai vari clasificatori, elecandoli modello per modello in base al tipo di metrica che stiamo considerando. Verranno mostrati, dunque, i risultati ottenuti negli score di: accuratezza, precisione, F1 score e recall.

Accuracy¶

In [160]:
data_frame = results_test.sort_values('accuracy', ascending=False)

plt.figure(figsize=(27,10))
plt.bar(data_frame.index, data_frame['accuracy'])
fig.show()
plt.ylabel('Accuracy',  fontsize=20)
plt.xlabel('Models',  fontsize=20)
ax = plt.title('Accuracy dei modelli',  fontsize=25)

Precision¶

In [161]:
data_frame = results_test.sort_values('precision', ascending=False)

plt.figure(figsize=(27,10))
plt.bar(data_frame.index, data_frame['precision'])
fig.show()
plt.ylabel('Precision',  fontsize=20)
plt.xlabel('Models',  fontsize=20)
ax = plt.title('Precision dei modelli',  fontsize=25)

Recall¶

In [162]:
data_frame = results_test.sort_values('recall', ascending=False)

plt.figure(figsize=(27,10))
plt.bar(data_frame.index, data_frame['recall'])
fig.show()
plt.ylabel('Recall',  fontsize=20)
plt.xlabel('Models',  fontsize=20)
ax = plt.title('Recall dei modelli',  fontsize=25)

F1 Score¶

In [163]:
data_frame = results_test.sort_values('f1', ascending=False)

plt.figure(figsize=(27,10))
plt.bar(data_frame.index, data_frame['f1'])
fig.show()
plt.ylabel('F1',  fontsize=20)
plt.xlabel('Models',  fontsize=20)
ax = plt.title('F1 score dei modelli',  fontsize=25)

Rete neurale¶

Una rete neurale è un modello matematico ispirato al funzionamento del cervello umano, composto da un insieme di unità chiamate neuroni artificiali o nodi. Questi neuroni sono organizzati in strati (layer) e comunicano tra loro attraverso connessioni pesate.

In [164]:
image2 = 'rete_neurale.png'
Image(filename=os.path.join(image2),width=350)
Out[164]:

Il funzionamento di una rete neurale può essere diviso in diverse fasi:

  • Input: L'input viene fornito alla rete neurale e può consistere in una serie di valori o un vettore di caratteristiche. Ad esempio, in un problema di riconoscimento di immagini, l'input può essere l'immagine stessa rappresentata da una matrice di pixel.
  • Pesatura e somma pesata: Ogni connessione tra i neuroni ha un peso associato che determina l'importanza di quella connessione. L'input viene moltiplicato per il peso delle connessioni corrispondenti e i risultati vengono sommati per calcolare un valore di attivazione per ciascun neurone.
  • Funzione di attivazione: Dopo la somma pesata, viene applicata una funzione di attivazione al valore di attivazione calcolato per introdurre non linearità nella rete. La funzione di attivazione può essere, ad esempio, la funzione sigmoide, la funzione ReLU (Rectified Linear Unit) o la funzione softmax.
  • Propagazione: L'output dei neuroni nel primo strato viene passato come input ai neuroni nel secondo strato e così via, attraverso tutti gli strati della rete. Questo processo è noto come propagazione in avanti o forward propagation.
  • Funzione di perdita: Alla fine del processo di propagazione in avanti, viene calcolata una misura dell'errore o della perdita tra l'output prodotto dalla rete neurale e il valore di output desiderato. Questa misura di errore può essere calcolata utilizzando diverse funzioni di perdita, come l'errore quadratico medio o l'entropia incrociata.
  • Retropropagazione: Una volta calcolata la funzione di perdita, la rete neurale utilizza un algoritmo chiamato retropropagazione dell'errore (backpropagation) per aggiornare i pesi delle connessioni all'interno della rete. Questo processo avviene in modo iterativo, regolando i pesi in modo che la rete minimizzi l'errore di predizione.
  • Ottimizzazione: Durante l'addestramento, la rete neurale cerca di ottimizzare i pesi delle connessioni per migliorare le sue prestazioni sul set di dati di addestramento. Ciò può comportare l'utilizzo di algoritmi di ottimizzazione come la discesa del gradiente.

Sono state implementate due reti neurali:

  • ANN scritta in maniera custom utilizzando tensorflow e keras per la realizzazione
  • MLP (Multi-Layer Perceptron)

ANN¶

In [165]:
df_neural_X = df_final.drop('RainTomorrow', axis=1)
df_neural_Y = df_final['RainTomorrow']
In [166]:
from keras.callbacks import EarlyStopping

#Splitting in train test, test set, validation set
X_train_rete, X_test_rete, y_train_rete, y_test_rete = train_test_split(df_neural_X, df_neural_Y, test_size=0.1, random_state=42)
X_train_val, X_val, y_train_val, y_val = train_test_split(X_train_rete, y_train_rete, test_size=0.22, random_state=42)

#Creazione dello scaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_val)
X_val_scaled = scaler.transform(X_val)
X_test_scaled = scaler.transform(X_test_rete)

# Definisco l'early stopping
early_stopping = EarlyStopping(monitor='recall', patience=50, mode='max', verbose=1)

#Creazione del modello
model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(X_train_scaled.shape[1],)))
model.add(Dense(64, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid', kernel_initializer='normal'))

#Compilazione del modello
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=[tf.keras.metrics.Recall(), tf.keras.metrics.Precision(), tf.keras.metrics.BinaryAccuracy()])

#Addestramento del modello con validation
history = model.fit(X_train_scaled, y_train_val, validation_data=(X_val_scaled, y_val), epochs=350, batch_size=2048)

#valutazione sul test set
test_loss, test_recall, test_precision, test_balanced_accuracy = model.evaluate(X_test_scaled, y_test_rete, verbose=1, callbacks=[early_stopping])
print("Test recall: ", test_recall)

# Generate predictions on the test set
y_pred_probability = model.predict(X_test_scaled)
threshold = 0.5
y_pred = (y_pred_probability > threshold).astype("int32")
Epoch 1/350
49/49 [==============================] - 4s 26ms/step - loss: 0.4930 - recall: 0.8166 - precision: 0.8361 - binary_accuracy: 0.8285 - val_loss: 0.3163 - val_recall: 0.8971 - val_precision: 0.8345 - val_binary_accuracy: 0.8595
Epoch 2/350
49/49 [==============================] - 1s 13ms/step - loss: 0.2952 - recall: 0.8899 - precision: 0.8577 - binary_accuracy: 0.8713 - val_loss: 0.2817 - val_recall: 0.9014 - val_precision: 0.8614 - val_binary_accuracy: 0.8781
Epoch 3/350
49/49 [==============================] - 1s 13ms/step - loss: 0.2712 - recall: 0.9018 - precision: 0.8699 - binary_accuracy: 0.8836 - val_loss: 0.2633 - val_recall: 0.9165 - val_precision: 0.8703 - val_binary_accuracy: 0.8899
Epoch 4/350
49/49 [==============================] - 1s 13ms/step - loss: 0.2521 - recall: 0.9132 - precision: 0.8839 - binary_accuracy: 0.8967 - val_loss: 0.2427 - val_recall: 0.9197 - val_precision: 0.8902 - val_binary_accuracy: 0.9031
Epoch 5/350
49/49 [==============================] - 1s 13ms/step - loss: 0.2330 - recall: 0.9220 - precision: 0.8974 - binary_accuracy: 0.9084 - val_loss: 0.2268 - val_recall: 0.9235 - val_precision: 0.9042 - val_binary_accuracy: 0.9128
Epoch 6/350
49/49 [==============================] - 1s 13ms/step - loss: 0.2165 - recall: 0.9272 - precision: 0.9093 - binary_accuracy: 0.9174 - val_loss: 0.2104 - val_recall: 0.9381 - val_precision: 0.9103 - val_binary_accuracy: 0.9228
Epoch 7/350
49/49 [==============================] - 1s 13ms/step - loss: 0.2018 - recall: 0.9338 - precision: 0.9166 - binary_accuracy: 0.9245 - val_loss: 0.1999 - val_recall: 0.9234 - val_precision: 0.9298 - val_binary_accuracy: 0.9268
Epoch 8/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1913 - recall: 0.9364 - precision: 0.9223 - binary_accuracy: 0.9289 - val_loss: 0.1901 - val_recall: 0.9284 - val_precision: 0.9319 - val_binary_accuracy: 0.9303
Epoch 9/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1845 - recall: 0.9375 - precision: 0.9256 - binary_accuracy: 0.9312 - val_loss: 0.1830 - val_recall: 0.9391 - val_precision: 0.9279 - val_binary_accuracy: 0.9331
Epoch 10/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1792 - recall: 0.9373 - precision: 0.9279 - binary_accuracy: 0.9323 - val_loss: 0.1779 - val_recall: 0.9272 - val_precision: 0.9397 - val_binary_accuracy: 0.9338
Epoch 11/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1723 - recall: 0.9404 - precision: 0.9309 - binary_accuracy: 0.9354 - val_loss: 0.1703 - val_recall: 0.9475 - val_precision: 0.9292 - val_binary_accuracy: 0.9376
Epoch 12/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1672 - recall: 0.9421 - precision: 0.9320 - binary_accuracy: 0.9368 - val_loss: 0.1669 - val_recall: 0.9511 - val_precision: 0.9277 - val_binary_accuracy: 0.9385
Epoch 13/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1630 - recall: 0.9440 - precision: 0.9342 - binary_accuracy: 0.9388 - val_loss: 0.1654 - val_recall: 0.9567 - val_precision: 0.9225 - val_binary_accuracy: 0.9381
Epoch 14/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1589 - recall: 0.9458 - precision: 0.9343 - binary_accuracy: 0.9397 - val_loss: 0.1600 - val_recall: 0.9443 - val_precision: 0.9376 - val_binary_accuracy: 0.9408
Epoch 15/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1562 - recall: 0.9471 - precision: 0.9355 - binary_accuracy: 0.9410 - val_loss: 0.1640 - val_recall: 0.9615 - val_precision: 0.9197 - val_binary_accuracy: 0.9388
Epoch 16/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1537 - recall: 0.9489 - precision: 0.9355 - binary_accuracy: 0.9418 - val_loss: 0.1552 - val_recall: 0.9620 - val_precision: 0.9269 - val_binary_accuracy: 0.9430
Epoch 17/350
49/49 [==============================] - 1s 14ms/step - loss: 0.1504 - recall: 0.9488 - precision: 0.9365 - binary_accuracy: 0.9423 - val_loss: 0.1514 - val_recall: 0.9458 - val_precision: 0.9410 - val_binary_accuracy: 0.9433
Epoch 18/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1471 - recall: 0.9518 - precision: 0.9372 - binary_accuracy: 0.9441 - val_loss: 0.1491 - val_recall: 0.9584 - val_precision: 0.9330 - val_binary_accuracy: 0.9448
Epoch 19/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1450 - recall: 0.9524 - precision: 0.9379 - binary_accuracy: 0.9447 - val_loss: 0.1512 - val_recall: 0.9649 - val_precision: 0.9270 - val_binary_accuracy: 0.9444
Epoch 20/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1426 - recall: 0.9541 - precision: 0.9385 - binary_accuracy: 0.9458 - val_loss: 0.1458 - val_recall: 0.9565 - val_precision: 0.9365 - val_binary_accuracy: 0.9458
Epoch 21/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1405 - recall: 0.9540 - precision: 0.9393 - binary_accuracy: 0.9462 - val_loss: 0.1461 - val_recall: 0.9454 - val_precision: 0.9451 - val_binary_accuracy: 0.9453
Epoch 22/350
49/49 [==============================] - 1s 14ms/step - loss: 0.1386 - recall: 0.9549 - precision: 0.9398 - binary_accuracy: 0.9469 - val_loss: 0.1434 - val_recall: 0.9538 - val_precision: 0.9401 - val_binary_accuracy: 0.9465
Epoch 23/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1362 - recall: 0.9557 - precision: 0.9410 - binary_accuracy: 0.9479 - val_loss: 0.1431 - val_recall: 0.9593 - val_precision: 0.9348 - val_binary_accuracy: 0.9461
Epoch 24/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1363 - recall: 0.9561 - precision: 0.9399 - binary_accuracy: 0.9475 - val_loss: 0.1469 - val_recall: 0.9686 - val_precision: 0.9254 - val_binary_accuracy: 0.9453
Epoch 25/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1340 - recall: 0.9581 - precision: 0.9404 - binary_accuracy: 0.9488 - val_loss: 0.1415 - val_recall: 0.9508 - val_precision: 0.9428 - val_binary_accuracy: 0.9466
Epoch 26/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1317 - recall: 0.9590 - precision: 0.9416 - binary_accuracy: 0.9498 - val_loss: 0.1494 - val_recall: 0.9328 - val_precision: 0.9513 - val_binary_accuracy: 0.9425
Epoch 27/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1312 - recall: 0.9591 - precision: 0.9417 - binary_accuracy: 0.9499 - val_loss: 0.1407 - val_recall: 0.9451 - val_precision: 0.9480 - val_binary_accuracy: 0.9466
Epoch 28/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1291 - recall: 0.9594 - precision: 0.9418 - binary_accuracy: 0.9501 - val_loss: 0.1363 - val_recall: 0.9629 - val_precision: 0.9384 - val_binary_accuracy: 0.9499
Epoch 29/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1272 - recall: 0.9600 - precision: 0.9430 - binary_accuracy: 0.9510 - val_loss: 0.1451 - val_recall: 0.9747 - val_precision: 0.9199 - val_binary_accuracy: 0.9449
Epoch 30/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1275 - recall: 0.9601 - precision: 0.9422 - binary_accuracy: 0.9506 - val_loss: 0.1371 - val_recall: 0.9706 - val_precision: 0.9317 - val_binary_accuracy: 0.9497
Epoch 31/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1257 - recall: 0.9615 - precision: 0.9424 - binary_accuracy: 0.9514 - val_loss: 0.1347 - val_recall: 0.9538 - val_precision: 0.9455 - val_binary_accuracy: 0.9494
Epoch 32/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1243 - recall: 0.9616 - precision: 0.9435 - binary_accuracy: 0.9521 - val_loss: 0.1342 - val_recall: 0.9590 - val_precision: 0.9412 - val_binary_accuracy: 0.9495
Epoch 33/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1233 - recall: 0.9628 - precision: 0.9437 - binary_accuracy: 0.9527 - val_loss: 0.1342 - val_recall: 0.9570 - val_precision: 0.9446 - val_binary_accuracy: 0.9504
Epoch 34/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1222 - recall: 0.9624 - precision: 0.9438 - binary_accuracy: 0.9526 - val_loss: 0.1326 - val_recall: 0.9629 - val_precision: 0.9405 - val_binary_accuracy: 0.9510
Epoch 35/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1219 - recall: 0.9641 - precision: 0.9444 - binary_accuracy: 0.9537 - val_loss: 0.1344 - val_recall: 0.9510 - val_precision: 0.9489 - val_binary_accuracy: 0.9499
Epoch 36/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1199 - recall: 0.9647 - precision: 0.9449 - binary_accuracy: 0.9543 - val_loss: 0.1314 - val_recall: 0.9643 - val_precision: 0.9391 - val_binary_accuracy: 0.9509
Epoch 37/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1188 - recall: 0.9647 - precision: 0.9456 - binary_accuracy: 0.9546 - val_loss: 0.1317 - val_recall: 0.9605 - val_precision: 0.9406 - val_binary_accuracy: 0.9499
Epoch 38/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1182 - recall: 0.9653 - precision: 0.9451 - binary_accuracy: 0.9546 - val_loss: 0.1302 - val_recall: 0.9605 - val_precision: 0.9430 - val_binary_accuracy: 0.9512
Epoch 39/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1171 - recall: 0.9648 - precision: 0.9454 - binary_accuracy: 0.9546 - val_loss: 0.1316 - val_recall: 0.9709 - val_precision: 0.9346 - val_binary_accuracy: 0.9515
Epoch 40/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1206 - recall: 0.9644 - precision: 0.9430 - binary_accuracy: 0.9531 - val_loss: 0.1300 - val_recall: 0.9643 - val_precision: 0.9415 - val_binary_accuracy: 0.9522
Epoch 41/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1185 - recall: 0.9649 - precision: 0.9441 - binary_accuracy: 0.9539 - val_loss: 0.1297 - val_recall: 0.9587 - val_precision: 0.9438 - val_binary_accuracy: 0.9508
Epoch 42/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1146 - recall: 0.9664 - precision: 0.9471 - binary_accuracy: 0.9563 - val_loss: 0.1283 - val_recall: 0.9660 - val_precision: 0.9397 - val_binary_accuracy: 0.9520
Epoch 43/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1136 - recall: 0.9673 - precision: 0.9466 - binary_accuracy: 0.9564 - val_loss: 0.1282 - val_recall: 0.9624 - val_precision: 0.9433 - val_binary_accuracy: 0.9523
Epoch 44/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1133 - recall: 0.9674 - precision: 0.9472 - binary_accuracy: 0.9567 - val_loss: 0.1277 - val_recall: 0.9677 - val_precision: 0.9382 - val_binary_accuracy: 0.9520
Epoch 45/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1123 - recall: 0.9674 - precision: 0.9474 - binary_accuracy: 0.9569 - val_loss: 0.1327 - val_recall: 0.9766 - val_precision: 0.9281 - val_binary_accuracy: 0.9505
Epoch 46/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1118 - recall: 0.9679 - precision: 0.9474 - binary_accuracy: 0.9571 - val_loss: 0.1265 - val_recall: 0.9597 - val_precision: 0.9464 - val_binary_accuracy: 0.9526
Epoch 47/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1117 - recall: 0.9672 - precision: 0.9478 - binary_accuracy: 0.9570 - val_loss: 0.1281 - val_recall: 0.9718 - val_precision: 0.9349 - val_binary_accuracy: 0.9521
Epoch 48/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1112 - recall: 0.9691 - precision: 0.9468 - binary_accuracy: 0.9574 - val_loss: 0.1279 - val_recall: 0.9659 - val_precision: 0.9408 - val_binary_accuracy: 0.9526
Epoch 49/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1103 - recall: 0.9686 - precision: 0.9482 - binary_accuracy: 0.9579 - val_loss: 0.1276 - val_recall: 0.9599 - val_precision: 0.9455 - val_binary_accuracy: 0.9523
Epoch 50/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1086 - recall: 0.9697 - precision: 0.9482 - binary_accuracy: 0.9584 - val_loss: 0.1246 - val_recall: 0.9627 - val_precision: 0.9450 - val_binary_accuracy: 0.9534
Epoch 51/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1089 - recall: 0.9695 - precision: 0.9486 - binary_accuracy: 0.9585 - val_loss: 0.1282 - val_recall: 0.9589 - val_precision: 0.9450 - val_binary_accuracy: 0.9515
Epoch 52/350
49/49 [==============================] - 1s 12ms/step - loss: 0.1095 - recall: 0.9687 - precision: 0.9476 - binary_accuracy: 0.9576 - val_loss: 0.1267 - val_recall: 0.9680 - val_precision: 0.9390 - val_binary_accuracy: 0.9526
Epoch 53/350
49/49 [==============================] - 1s 14ms/step - loss: 0.1072 - recall: 0.9704 - precision: 0.9488 - binary_accuracy: 0.9591 - val_loss: 0.1258 - val_recall: 0.9648 - val_precision: 0.9432 - val_binary_accuracy: 0.9534
Epoch 54/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1066 - recall: 0.9710 - precision: 0.9493 - binary_accuracy: 0.9596 - val_loss: 0.1264 - val_recall: 0.9640 - val_precision: 0.9437 - val_binary_accuracy: 0.9533
Epoch 55/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1072 - recall: 0.9695 - precision: 0.9485 - binary_accuracy: 0.9585 - val_loss: 0.1316 - val_recall: 0.9463 - val_precision: 0.9536 - val_binary_accuracy: 0.9501
Epoch 56/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1072 - recall: 0.9700 - precision: 0.9492 - binary_accuracy: 0.9591 - val_loss: 0.1240 - val_recall: 0.9665 - val_precision: 0.9417 - val_binary_accuracy: 0.9533
Epoch 57/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1046 - recall: 0.9710 - precision: 0.9500 - binary_accuracy: 0.9600 - val_loss: 0.1251 - val_recall: 0.9712 - val_precision: 0.9380 - val_binary_accuracy: 0.9535
Epoch 58/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1059 - recall: 0.9707 - precision: 0.9498 - binary_accuracy: 0.9597 - val_loss: 0.1236 - val_recall: 0.9665 - val_precision: 0.9442 - val_binary_accuracy: 0.9547
Epoch 59/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1047 - recall: 0.9717 - precision: 0.9503 - binary_accuracy: 0.9605 - val_loss: 0.1292 - val_recall: 0.9739 - val_precision: 0.9316 - val_binary_accuracy: 0.9512
Epoch 60/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1050 - recall: 0.9712 - precision: 0.9496 - binary_accuracy: 0.9599 - val_loss: 0.1240 - val_recall: 0.9598 - val_precision: 0.9500 - val_binary_accuracy: 0.9546
Epoch 61/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1046 - recall: 0.9714 - precision: 0.9498 - binary_accuracy: 0.9601 - val_loss: 0.1239 - val_recall: 0.9655 - val_precision: 0.9440 - val_binary_accuracy: 0.9541
Epoch 62/350
49/49 [==============================] - 1s 14ms/step - loss: 0.1034 - recall: 0.9722 - precision: 0.9504 - binary_accuracy: 0.9608 - val_loss: 0.1234 - val_recall: 0.9683 - val_precision: 0.9422 - val_binary_accuracy: 0.9544
Epoch 63/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1028 - recall: 0.9722 - precision: 0.9503 - binary_accuracy: 0.9607 - val_loss: 0.1233 - val_recall: 0.9662 - val_precision: 0.9454 - val_binary_accuracy: 0.9552
Epoch 64/350
49/49 [==============================] - 1s 14ms/step - loss: 0.1013 - recall: 0.9729 - precision: 0.9509 - binary_accuracy: 0.9613 - val_loss: 0.1226 - val_recall: 0.9708 - val_precision: 0.9413 - val_binary_accuracy: 0.9551
Epoch 65/350
49/49 [==============================] - 1s 14ms/step - loss: 0.1038 - recall: 0.9719 - precision: 0.9501 - binary_accuracy: 0.9604 - val_loss: 0.1274 - val_recall: 0.9598 - val_precision: 0.9459 - val_binary_accuracy: 0.9524
Epoch 66/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1015 - recall: 0.9732 - precision: 0.9512 - binary_accuracy: 0.9617 - val_loss: 0.1245 - val_recall: 0.9690 - val_precision: 0.9406 - val_binary_accuracy: 0.9539
Epoch 67/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1009 - recall: 0.9726 - precision: 0.9512 - binary_accuracy: 0.9614 - val_loss: 0.1245 - val_recall: 0.9541 - val_precision: 0.9516 - val_binary_accuracy: 0.9528
Epoch 68/350
49/49 [==============================] - 1s 13ms/step - loss: 0.1004 - recall: 0.9730 - precision: 0.9514 - binary_accuracy: 0.9617 - val_loss: 0.1264 - val_recall: 0.9772 - val_precision: 0.9339 - val_binary_accuracy: 0.9540
Epoch 69/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0995 - recall: 0.9735 - precision: 0.9523 - binary_accuracy: 0.9624 - val_loss: 0.1236 - val_recall: 0.9675 - val_precision: 0.9440 - val_binary_accuracy: 0.9551
Epoch 70/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0983 - recall: 0.9743 - precision: 0.9532 - binary_accuracy: 0.9633 - val_loss: 0.1234 - val_recall: 0.9720 - val_precision: 0.9396 - val_binary_accuracy: 0.9548
Epoch 71/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0983 - recall: 0.9737 - precision: 0.9528 - binary_accuracy: 0.9628 - val_loss: 0.1370 - val_recall: 0.9851 - val_precision: 0.9224 - val_binary_accuracy: 0.9511
Epoch 72/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0996 - recall: 0.9729 - precision: 0.9514 - binary_accuracy: 0.9616 - val_loss: 0.1247 - val_recall: 0.9566 - val_precision: 0.9512 - val_binary_accuracy: 0.9538
Epoch 73/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0999 - recall: 0.9732 - precision: 0.9526 - binary_accuracy: 0.9624 - val_loss: 0.1227 - val_recall: 0.9687 - val_precision: 0.9426 - val_binary_accuracy: 0.9549
Epoch 74/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0984 - recall: 0.9747 - precision: 0.9516 - binary_accuracy: 0.9626 - val_loss: 0.1234 - val_recall: 0.9751 - val_precision: 0.9376 - val_binary_accuracy: 0.9551
Epoch 75/350
49/49 [==============================] - 1s 12ms/step - loss: 0.0970 - recall: 0.9743 - precision: 0.9527 - binary_accuracy: 0.9630 - val_loss: 0.1215 - val_recall: 0.9718 - val_precision: 0.9424 - val_binary_accuracy: 0.9562
Epoch 76/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0982 - recall: 0.9745 - precision: 0.9520 - binary_accuracy: 0.9628 - val_loss: 0.1258 - val_recall: 0.9481 - val_precision: 0.9571 - val_binary_accuracy: 0.9528
Epoch 77/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0973 - recall: 0.9737 - precision: 0.9532 - binary_accuracy: 0.9630 - val_loss: 0.1218 - val_recall: 0.9693 - val_precision: 0.9454 - val_binary_accuracy: 0.9566
Epoch 78/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0957 - recall: 0.9752 - precision: 0.9539 - binary_accuracy: 0.9641 - val_loss: 0.1232 - val_recall: 0.9595 - val_precision: 0.9500 - val_binary_accuracy: 0.9545
Epoch 79/350
49/49 [==============================] - 1s 12ms/step - loss: 0.0955 - recall: 0.9755 - precision: 0.9540 - binary_accuracy: 0.9643 - val_loss: 0.1213 - val_recall: 0.9733 - val_precision: 0.9419 - val_binary_accuracy: 0.9566
Epoch 80/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0944 - recall: 0.9773 - precision: 0.9534 - binary_accuracy: 0.9648 - val_loss: 0.1237 - val_recall: 0.9542 - val_precision: 0.9539 - val_binary_accuracy: 0.9540
Epoch 81/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0959 - recall: 0.9753 - precision: 0.9534 - binary_accuracy: 0.9639 - val_loss: 0.1268 - val_recall: 0.9822 - val_precision: 0.9313 - val_binary_accuracy: 0.9548
Epoch 82/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0948 - recall: 0.9758 - precision: 0.9536 - binary_accuracy: 0.9642 - val_loss: 0.1195 - val_recall: 0.9717 - val_precision: 0.9450 - val_binary_accuracy: 0.9576
Epoch 83/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0935 - recall: 0.9763 - precision: 0.9547 - binary_accuracy: 0.9650 - val_loss: 0.1299 - val_recall: 0.9432 - val_precision: 0.9577 - val_binary_accuracy: 0.9508
Epoch 84/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0948 - recall: 0.9750 - precision: 0.9538 - binary_accuracy: 0.9639 - val_loss: 0.1210 - val_recall: 0.9722 - val_precision: 0.9431 - val_binary_accuracy: 0.9568
Epoch 85/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0931 - recall: 0.9772 - precision: 0.9537 - binary_accuracy: 0.9649 - val_loss: 0.1214 - val_recall: 0.9708 - val_precision: 0.9428 - val_binary_accuracy: 0.9559
Epoch 86/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0925 - recall: 0.9769 - precision: 0.9552 - binary_accuracy: 0.9656 - val_loss: 0.1227 - val_recall: 0.9732 - val_precision: 0.9409 - val_binary_accuracy: 0.9560
Epoch 87/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0943 - recall: 0.9756 - precision: 0.9543 - binary_accuracy: 0.9645 - val_loss: 0.1193 - val_recall: 0.9721 - val_precision: 0.9435 - val_binary_accuracy: 0.9569
Epoch 88/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0920 - recall: 0.9769 - precision: 0.9551 - binary_accuracy: 0.9655 - val_loss: 0.1212 - val_recall: 0.9646 - val_precision: 0.9488 - val_binary_accuracy: 0.9562
Epoch 89/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0933 - recall: 0.9766 - precision: 0.9543 - binary_accuracy: 0.9650 - val_loss: 0.1207 - val_recall: 0.9734 - val_precision: 0.9428 - val_binary_accuracy: 0.9571
Epoch 90/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0916 - recall: 0.9772 - precision: 0.9554 - binary_accuracy: 0.9658 - val_loss: 0.1252 - val_recall: 0.9768 - val_precision: 0.9386 - val_binary_accuracy: 0.9564
Epoch 91/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0934 - recall: 0.9760 - precision: 0.9546 - binary_accuracy: 0.9648 - val_loss: 0.1235 - val_recall: 0.9754 - val_precision: 0.9363 - val_binary_accuracy: 0.9545
Epoch 92/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0923 - recall: 0.9773 - precision: 0.9549 - binary_accuracy: 0.9656 - val_loss: 0.1199 - val_recall: 0.9755 - val_precision: 0.9405 - val_binary_accuracy: 0.9569
Epoch 93/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0904 - recall: 0.9774 - precision: 0.9559 - binary_accuracy: 0.9662 - val_loss: 0.1220 - val_recall: 0.9757 - val_precision: 0.9376 - val_binary_accuracy: 0.9554
Epoch 94/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0901 - recall: 0.9774 - precision: 0.9553 - binary_accuracy: 0.9659 - val_loss: 0.1185 - val_recall: 0.9713 - val_precision: 0.9470 - val_binary_accuracy: 0.9585
Epoch 95/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0891 - recall: 0.9779 - precision: 0.9562 - binary_accuracy: 0.9666 - val_loss: 0.1214 - val_recall: 0.9642 - val_precision: 0.9491 - val_binary_accuracy: 0.9562
Epoch 96/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0884 - recall: 0.9786 - precision: 0.9572 - binary_accuracy: 0.9675 - val_loss: 0.1230 - val_recall: 0.9620 - val_precision: 0.9504 - val_binary_accuracy: 0.9559
Epoch 97/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0904 - recall: 0.9774 - precision: 0.9556 - binary_accuracy: 0.9660 - val_loss: 0.1222 - val_recall: 0.9791 - val_precision: 0.9376 - val_binary_accuracy: 0.9570
Epoch 98/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0894 - recall: 0.9778 - precision: 0.9563 - binary_accuracy: 0.9666 - val_loss: 0.1178 - val_recall: 0.9707 - val_precision: 0.9458 - val_binary_accuracy: 0.9576
Epoch 99/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0877 - recall: 0.9786 - precision: 0.9565 - binary_accuracy: 0.9671 - val_loss: 0.1216 - val_recall: 0.9668 - val_precision: 0.9485 - val_binary_accuracy: 0.9571
Epoch 100/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0886 - recall: 0.9783 - precision: 0.9563 - binary_accuracy: 0.9668 - val_loss: 0.1228 - val_recall: 0.9571 - val_precision: 0.9536 - val_binary_accuracy: 0.9553
Epoch 101/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0886 - recall: 0.9780 - precision: 0.9568 - binary_accuracy: 0.9669 - val_loss: 0.1196 - val_recall: 0.9648 - val_precision: 0.9498 - val_binary_accuracy: 0.9569
Epoch 102/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0869 - recall: 0.9792 - precision: 0.9573 - binary_accuracy: 0.9678 - val_loss: 0.1217 - val_recall: 0.9794 - val_precision: 0.9359 - val_binary_accuracy: 0.9562
Epoch 103/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0877 - recall: 0.9779 - precision: 0.9568 - binary_accuracy: 0.9669 - val_loss: 0.1191 - val_recall: 0.9660 - val_precision: 0.9508 - val_binary_accuracy: 0.9580
Epoch 104/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0873 - recall: 0.9785 - precision: 0.9572 - binary_accuracy: 0.9674 - val_loss: 0.1215 - val_recall: 0.9737 - val_precision: 0.9410 - val_binary_accuracy: 0.9564
Epoch 105/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0861 - recall: 0.9797 - precision: 0.9576 - binary_accuracy: 0.9682 - val_loss: 0.1196 - val_recall: 0.9647 - val_precision: 0.9501 - val_binary_accuracy: 0.9570
Epoch 106/350
49/49 [==============================] - 1s 19ms/step - loss: 0.0869 - recall: 0.9788 - precision: 0.9579 - binary_accuracy: 0.9679 - val_loss: 0.1194 - val_recall: 0.9760 - val_precision: 0.9422 - val_binary_accuracy: 0.9581
Epoch 107/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0868 - recall: 0.9785 - precision: 0.9577 - binary_accuracy: 0.9677 - val_loss: 0.1184 - val_recall: 0.9662 - val_precision: 0.9502 - val_binary_accuracy: 0.9578
Epoch 108/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0857 - recall: 0.9787 - precision: 0.9576 - binary_accuracy: 0.9678 - val_loss: 0.1217 - val_recall: 0.9670 - val_precision: 0.9478 - val_binary_accuracy: 0.9569
Epoch 109/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0844 - recall: 0.9795 - precision: 0.9588 - binary_accuracy: 0.9688 - val_loss: 0.1205 - val_recall: 0.9762 - val_precision: 0.9417 - val_binary_accuracy: 0.9579
Epoch 110/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0865 - recall: 0.9791 - precision: 0.9577 - binary_accuracy: 0.9680 - val_loss: 0.1203 - val_recall: 0.9695 - val_precision: 0.9444 - val_binary_accuracy: 0.9562
Epoch 111/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0849 - recall: 0.9793 - precision: 0.9588 - binary_accuracy: 0.9686 - val_loss: 0.1196 - val_recall: 0.9720 - val_precision: 0.9445 - val_binary_accuracy: 0.9575
Epoch 112/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0855 - recall: 0.9792 - precision: 0.9585 - binary_accuracy: 0.9684 - val_loss: 0.1245 - val_recall: 0.9684 - val_precision: 0.9425 - val_binary_accuracy: 0.9546
Epoch 113/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0850 - recall: 0.9794 - precision: 0.9591 - binary_accuracy: 0.9688 - val_loss: 0.1175 - val_recall: 0.9692 - val_precision: 0.9493 - val_binary_accuracy: 0.9587
Epoch 114/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0829 - recall: 0.9806 - precision: 0.9593 - binary_accuracy: 0.9695 - val_loss: 0.1204 - val_recall: 0.9677 - val_precision: 0.9490 - val_binary_accuracy: 0.9579
Epoch 115/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0831 - recall: 0.9799 - precision: 0.9589 - binary_accuracy: 0.9690 - val_loss: 0.1325 - val_recall: 0.9384 - val_precision: 0.9605 - val_binary_accuracy: 0.9499
Epoch 116/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0842 - recall: 0.9792 - precision: 0.9589 - binary_accuracy: 0.9687 - val_loss: 0.1209 - val_recall: 0.9640 - val_precision: 0.9511 - val_binary_accuracy: 0.9572
Epoch 117/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0826 - recall: 0.9803 - precision: 0.9597 - binary_accuracy: 0.9696 - val_loss: 0.1220 - val_recall: 0.9666 - val_precision: 0.9481 - val_binary_accuracy: 0.9568
Epoch 118/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0852 - recall: 0.9792 - precision: 0.9583 - binary_accuracy: 0.9683 - val_loss: 0.1201 - val_recall: 0.9800 - val_precision: 0.9405 - val_binary_accuracy: 0.9590
Epoch 119/350
49/49 [==============================] - 1s 22ms/step - loss: 0.0829 - recall: 0.9804 - precision: 0.9592 - binary_accuracy: 0.9694 - val_loss: 0.1220 - val_recall: 0.9752 - val_precision: 0.9429 - val_binary_accuracy: 0.9581
Epoch 120/350
49/49 [==============================] - 1s 19ms/step - loss: 0.0822 - recall: 0.9806 - precision: 0.9596 - binary_accuracy: 0.9697 - val_loss: 0.1177 - val_recall: 0.9720 - val_precision: 0.9482 - val_binary_accuracy: 0.9594
Epoch 121/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0828 - recall: 0.9805 - precision: 0.9590 - binary_accuracy: 0.9693 - val_loss: 0.1253 - val_recall: 0.9500 - val_precision: 0.9576 - val_binary_accuracy: 0.9540
Epoch 122/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0816 - recall: 0.9802 - precision: 0.9600 - binary_accuracy: 0.9697 - val_loss: 0.1266 - val_recall: 0.9705 - val_precision: 0.9440 - val_binary_accuracy: 0.9564
Epoch 123/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0807 - recall: 0.9813 - precision: 0.9608 - binary_accuracy: 0.9706 - val_loss: 0.1201 - val_recall: 0.9715 - val_precision: 0.9448 - val_binary_accuracy: 0.9574
Epoch 124/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0824 - recall: 0.9803 - precision: 0.9602 - binary_accuracy: 0.9699 - val_loss: 0.1199 - val_recall: 0.9757 - val_precision: 0.9426 - val_binary_accuracy: 0.9581
Epoch 125/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0810 - recall: 0.9814 - precision: 0.9597 - binary_accuracy: 0.9701 - val_loss: 0.1223 - val_recall: 0.9752 - val_precision: 0.9433 - val_binary_accuracy: 0.9583
Epoch 126/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0794 - recall: 0.9821 - precision: 0.9613 - binary_accuracy: 0.9713 - val_loss: 0.1197 - val_recall: 0.9766 - val_precision: 0.9429 - val_binary_accuracy: 0.9587
Epoch 127/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0809 - recall: 0.9810 - precision: 0.9602 - binary_accuracy: 0.9702 - val_loss: 0.1225 - val_recall: 0.9745 - val_precision: 0.9424 - val_binary_accuracy: 0.9574
Epoch 128/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0807 - recall: 0.9809 - precision: 0.9603 - binary_accuracy: 0.9702 - val_loss: 0.1179 - val_recall: 0.9670 - val_precision: 0.9501 - val_binary_accuracy: 0.9581
Epoch 129/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0802 - recall: 0.9806 - precision: 0.9605 - binary_accuracy: 0.9702 - val_loss: 0.1187 - val_recall: 0.9702 - val_precision: 0.9482 - val_binary_accuracy: 0.9586
Epoch 130/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0813 - recall: 0.9805 - precision: 0.9601 - binary_accuracy: 0.9699 - val_loss: 0.1177 - val_recall: 0.9713 - val_precision: 0.9483 - val_binary_accuracy: 0.9592
Epoch 131/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0781 - recall: 0.9826 - precision: 0.9612 - binary_accuracy: 0.9715 - val_loss: 0.1213 - val_recall: 0.9695 - val_precision: 0.9474 - val_binary_accuracy: 0.9578
Epoch 132/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0779 - recall: 0.9816 - precision: 0.9618 - binary_accuracy: 0.9714 - val_loss: 0.1213 - val_recall: 0.9622 - val_precision: 0.9519 - val_binary_accuracy: 0.9567
Epoch 133/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0800 - recall: 0.9816 - precision: 0.9609 - binary_accuracy: 0.9709 - val_loss: 0.1201 - val_recall: 0.9629 - val_precision: 0.9518 - val_binary_accuracy: 0.9570
Epoch 134/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0787 - recall: 0.9819 - precision: 0.9614 - binary_accuracy: 0.9713 - val_loss: 0.1196 - val_recall: 0.9718 - val_precision: 0.9495 - val_binary_accuracy: 0.9600
Epoch 135/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0787 - recall: 0.9822 - precision: 0.9619 - binary_accuracy: 0.9717 - val_loss: 0.1222 - val_recall: 0.9662 - val_precision: 0.9510 - val_binary_accuracy: 0.9582
Epoch 136/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0769 - recall: 0.9825 - precision: 0.9621 - binary_accuracy: 0.9719 - val_loss: 0.1230 - val_recall: 0.9585 - val_precision: 0.9532 - val_binary_accuracy: 0.9557
Epoch 137/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0777 - recall: 0.9818 - precision: 0.9619 - binary_accuracy: 0.9715 - val_loss: 0.1186 - val_recall: 0.9772 - val_precision: 0.9444 - val_binary_accuracy: 0.9599
Epoch 138/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0779 - recall: 0.9820 - precision: 0.9613 - binary_accuracy: 0.9713 - val_loss: 0.1205 - val_recall: 0.9747 - val_precision: 0.9442 - val_binary_accuracy: 0.9585
Epoch 139/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0771 - recall: 0.9824 - precision: 0.9620 - binary_accuracy: 0.9718 - val_loss: 0.1200 - val_recall: 0.9700 - val_precision: 0.9483 - val_binary_accuracy: 0.9585
Epoch 140/350
49/49 [==============================] - 1s 16ms/step - loss: 0.0757 - recall: 0.9830 - precision: 0.9625 - binary_accuracy: 0.9724 - val_loss: 0.1305 - val_recall: 0.9465 - val_precision: 0.9580 - val_binary_accuracy: 0.9525
Epoch 141/350
49/49 [==============================] - 1s 17ms/step - loss: 0.0777 - recall: 0.9816 - precision: 0.9622 - binary_accuracy: 0.9716 - val_loss: 0.1208 - val_recall: 0.9795 - val_precision: 0.9427 - val_binary_accuracy: 0.9600
Epoch 142/350
49/49 [==============================] - 1s 18ms/step - loss: 0.0751 - recall: 0.9832 - precision: 0.9625 - binary_accuracy: 0.9725 - val_loss: 0.1218 - val_recall: 0.9812 - val_precision: 0.9403 - val_binary_accuracy: 0.9594
Epoch 143/350
49/49 [==============================] - 1s 17ms/step - loss: 0.0756 - recall: 0.9829 - precision: 0.9622 - binary_accuracy: 0.9722 - val_loss: 0.1248 - val_recall: 0.9568 - val_precision: 0.9565 - val_binary_accuracy: 0.9566
Epoch 144/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0764 - recall: 0.9822 - precision: 0.9623 - binary_accuracy: 0.9719 - val_loss: 0.1210 - val_recall: 0.9657 - val_precision: 0.9522 - val_binary_accuracy: 0.9586
Epoch 145/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0773 - recall: 0.9817 - precision: 0.9630 - binary_accuracy: 0.9720 - val_loss: 0.1215 - val_recall: 0.9756 - val_precision: 0.9450 - val_binary_accuracy: 0.9594
Epoch 146/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0741 - recall: 0.9835 - precision: 0.9639 - binary_accuracy: 0.9734 - val_loss: 0.1202 - val_recall: 0.9792 - val_precision: 0.9445 - val_binary_accuracy: 0.9608
Epoch 147/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0747 - recall: 0.9832 - precision: 0.9638 - binary_accuracy: 0.9731 - val_loss: 0.1228 - val_recall: 0.9695 - val_precision: 0.9464 - val_binary_accuracy: 0.9573
Epoch 148/350
49/49 [==============================] - 1s 16ms/step - loss: 0.0731 - recall: 0.9841 - precision: 0.9639 - binary_accuracy: 0.9736 - val_loss: 0.1237 - val_recall: 0.9680 - val_precision: 0.9500 - val_binary_accuracy: 0.9585
Epoch 149/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0775 - recall: 0.9814 - precision: 0.9621 - binary_accuracy: 0.9714 - val_loss: 0.1196 - val_recall: 0.9699 - val_precision: 0.9496 - val_binary_accuracy: 0.9592
Epoch 150/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0758 - recall: 0.9829 - precision: 0.9630 - binary_accuracy: 0.9726 - val_loss: 0.1212 - val_recall: 0.9610 - val_precision: 0.9550 - val_binary_accuracy: 0.9579
Epoch 151/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0838 - recall: 0.9783 - precision: 0.9588 - binary_accuracy: 0.9682 - val_loss: 0.1265 - val_recall: 0.9513 - val_precision: 0.9592 - val_binary_accuracy: 0.9554
Epoch 152/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0743 - recall: 0.9833 - precision: 0.9640 - binary_accuracy: 0.9733 - val_loss: 0.1206 - val_recall: 0.9801 - val_precision: 0.9426 - val_binary_accuracy: 0.9602
Epoch 153/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0735 - recall: 0.9835 - precision: 0.9639 - binary_accuracy: 0.9733 - val_loss: 0.1208 - val_recall: 0.9766 - val_precision: 0.9439 - val_binary_accuracy: 0.9592
Epoch 154/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0730 - recall: 0.9839 - precision: 0.9647 - binary_accuracy: 0.9740 - val_loss: 0.1210 - val_recall: 0.9819 - val_precision: 0.9427 - val_binary_accuracy: 0.9611
Epoch 155/350
49/49 [==============================] - 1s 21ms/step - loss: 0.0723 - recall: 0.9845 - precision: 0.9643 - binary_accuracy: 0.9741 - val_loss: 0.1191 - val_recall: 0.9690 - val_precision: 0.9520 - val_binary_accuracy: 0.9601
Epoch 156/350
49/49 [==============================] - 1s 22ms/step - loss: 0.0716 - recall: 0.9842 - precision: 0.9654 - binary_accuracy: 0.9745 - val_loss: 0.1215 - val_recall: 0.9830 - val_precision: 0.9423 - val_binary_accuracy: 0.9614
Epoch 157/350
49/49 [==============================] - 1s 16ms/step - loss: 0.0722 - recall: 0.9838 - precision: 0.9641 - binary_accuracy: 0.9736 - val_loss: 0.1244 - val_recall: 0.9575 - val_precision: 0.9558 - val_binary_accuracy: 0.9566
Epoch 158/350
49/49 [==============================] - 1s 16ms/step - loss: 0.0739 - recall: 0.9829 - precision: 0.9639 - binary_accuracy: 0.9731 - val_loss: 0.1227 - val_recall: 0.9734 - val_precision: 0.9463 - val_binary_accuracy: 0.9591
Epoch 159/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0719 - recall: 0.9846 - precision: 0.9651 - binary_accuracy: 0.9745 - val_loss: 0.1192 - val_recall: 0.9747 - val_precision: 0.9491 - val_binary_accuracy: 0.9612
Epoch 160/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0751 - recall: 0.9824 - precision: 0.9626 - binary_accuracy: 0.9721 - val_loss: 0.1213 - val_recall: 0.9677 - val_precision: 0.9516 - val_binary_accuracy: 0.9592
Epoch 161/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0713 - recall: 0.9850 - precision: 0.9649 - binary_accuracy: 0.9746 - val_loss: 0.1199 - val_recall: 0.9632 - val_precision: 0.9556 - val_binary_accuracy: 0.9592
Epoch 162/350
49/49 [==============================] - 1s 19ms/step - loss: 0.0741 - recall: 0.9833 - precision: 0.9640 - binary_accuracy: 0.9733 - val_loss: 0.1190 - val_recall: 0.9675 - val_precision: 0.9529 - val_binary_accuracy: 0.9598
Epoch 163/350
49/49 [==============================] - 1s 23ms/step - loss: 0.0734 - recall: 0.9832 - precision: 0.9635 - binary_accuracy: 0.9730 - val_loss: 0.1202 - val_recall: 0.9647 - val_precision: 0.9545 - val_binary_accuracy: 0.9593
Epoch 164/350
49/49 [==============================] - 1s 25ms/step - loss: 0.0700 - recall: 0.9851 - precision: 0.9654 - binary_accuracy: 0.9749 - val_loss: 0.1234 - val_recall: 0.9715 - val_precision: 0.9468 - val_binary_accuracy: 0.9585
Epoch 165/350
49/49 [==============================] - 1s 17ms/step - loss: 0.0737 - recall: 0.9832 - precision: 0.9636 - binary_accuracy: 0.9730 - val_loss: 0.1306 - val_recall: 0.9473 - val_precision: 0.9614 - val_binary_accuracy: 0.9546
Epoch 166/350
49/49 [==============================] - 1s 21ms/step - loss: 0.0739 - recall: 0.9833 - precision: 0.9637 - binary_accuracy: 0.9732 - val_loss: 0.1215 - val_recall: 0.9807 - val_precision: 0.9405 - val_binary_accuracy: 0.9593
Epoch 167/350
49/49 [==============================] - 1s 18ms/step - loss: 0.0701 - recall: 0.9849 - precision: 0.9656 - binary_accuracy: 0.9749 - val_loss: 0.1250 - val_recall: 0.9563 - val_precision: 0.9562 - val_binary_accuracy: 0.9562
Epoch 168/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0710 - recall: 0.9849 - precision: 0.9651 - binary_accuracy: 0.9747 - val_loss: 0.1227 - val_recall: 0.9587 - val_precision: 0.9583 - val_binary_accuracy: 0.9585
Epoch 169/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0701 - recall: 0.9849 - precision: 0.9659 - binary_accuracy: 0.9751 - val_loss: 0.1196 - val_recall: 0.9746 - val_precision: 0.9485 - val_binary_accuracy: 0.9609
Epoch 170/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0685 - recall: 0.9856 - precision: 0.9664 - binary_accuracy: 0.9757 - val_loss: 0.1216 - val_recall: 0.9645 - val_precision: 0.9546 - val_binary_accuracy: 0.9593
Epoch 171/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0709 - recall: 0.9841 - precision: 0.9654 - binary_accuracy: 0.9744 - val_loss: 0.1215 - val_recall: 0.9699 - val_precision: 0.9523 - val_binary_accuracy: 0.9606
Epoch 172/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0683 - recall: 0.9854 - precision: 0.9666 - binary_accuracy: 0.9757 - val_loss: 0.1263 - val_recall: 0.9794 - val_precision: 0.9440 - val_binary_accuracy: 0.9606
Epoch 173/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0723 - recall: 0.9831 - precision: 0.9645 - binary_accuracy: 0.9735 - val_loss: 0.1208 - val_recall: 0.9750 - val_precision: 0.9488 - val_binary_accuracy: 0.9612
Epoch 174/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0694 - recall: 0.9840 - precision: 0.9658 - binary_accuracy: 0.9746 - val_loss: 0.1196 - val_recall: 0.9706 - val_precision: 0.9519 - val_binary_accuracy: 0.9607
Epoch 175/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0681 - recall: 0.9855 - precision: 0.9668 - binary_accuracy: 0.9758 - val_loss: 0.1207 - val_recall: 0.9691 - val_precision: 0.9530 - val_binary_accuracy: 0.9606
Epoch 176/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0693 - recall: 0.9853 - precision: 0.9657 - binary_accuracy: 0.9751 - val_loss: 0.1322 - val_recall: 0.9446 - val_precision: 0.9610 - val_binary_accuracy: 0.9531
Epoch 177/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0699 - recall: 0.9844 - precision: 0.9659 - binary_accuracy: 0.9749 - val_loss: 0.1235 - val_recall: 0.9798 - val_precision: 0.9437 - val_binary_accuracy: 0.9607
Epoch 178/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0684 - recall: 0.9848 - precision: 0.9668 - binary_accuracy: 0.9755 - val_loss: 0.1205 - val_recall: 0.9737 - val_precision: 0.9503 - val_binary_accuracy: 0.9614
Epoch 179/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0688 - recall: 0.9848 - precision: 0.9662 - binary_accuracy: 0.9752 - val_loss: 0.1218 - val_recall: 0.9640 - val_precision: 0.9566 - val_binary_accuracy: 0.9601
Epoch 180/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0684 - recall: 0.9840 - precision: 0.9665 - binary_accuracy: 0.9750 - val_loss: 0.1207 - val_recall: 0.9755 - val_precision: 0.9486 - val_binary_accuracy: 0.9613
Epoch 181/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0690 - recall: 0.9852 - precision: 0.9660 - binary_accuracy: 0.9753 - val_loss: 0.1239 - val_recall: 0.9638 - val_precision: 0.9550 - val_binary_accuracy: 0.9592
Epoch 182/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0690 - recall: 0.9846 - precision: 0.9659 - binary_accuracy: 0.9749 - val_loss: 0.1195 - val_recall: 0.9711 - val_precision: 0.9512 - val_binary_accuracy: 0.9606
Epoch 183/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0669 - recall: 0.9853 - precision: 0.9671 - binary_accuracy: 0.9759 - val_loss: 0.1284 - val_recall: 0.9844 - val_precision: 0.9414 - val_binary_accuracy: 0.9616
Epoch 184/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0668 - recall: 0.9857 - precision: 0.9675 - binary_accuracy: 0.9763 - val_loss: 0.1243 - val_recall: 0.9817 - val_precision: 0.9426 - val_binary_accuracy: 0.9609
Epoch 185/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0664 - recall: 0.9860 - precision: 0.9677 - binary_accuracy: 0.9766 - val_loss: 0.1239 - val_recall: 0.9793 - val_precision: 0.9442 - val_binary_accuracy: 0.9607
Epoch 186/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0694 - recall: 0.9839 - precision: 0.9669 - binary_accuracy: 0.9751 - val_loss: 0.1232 - val_recall: 0.9841 - val_precision: 0.9417 - val_binary_accuracy: 0.9616
Epoch 187/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0659 - recall: 0.9864 - precision: 0.9675 - binary_accuracy: 0.9766 - val_loss: 0.1209 - val_recall: 0.9718 - val_precision: 0.9530 - val_binary_accuracy: 0.9619
Epoch 188/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0698 - recall: 0.9843 - precision: 0.9663 - binary_accuracy: 0.9750 - val_loss: 0.1249 - val_recall: 0.9675 - val_precision: 0.9541 - val_binary_accuracy: 0.9605
Epoch 189/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0660 - recall: 0.9857 - precision: 0.9677 - binary_accuracy: 0.9764 - val_loss: 0.1209 - val_recall: 0.9811 - val_precision: 0.9444 - val_binary_accuracy: 0.9616
Epoch 190/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0672 - recall: 0.9854 - precision: 0.9671 - binary_accuracy: 0.9760 - val_loss: 0.1242 - val_recall: 0.9596 - val_precision: 0.9575 - val_binary_accuracy: 0.9585
Epoch 191/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0667 - recall: 0.9852 - precision: 0.9677 - binary_accuracy: 0.9762 - val_loss: 0.1231 - val_recall: 0.9768 - val_precision: 0.9468 - val_binary_accuracy: 0.9610
Epoch 192/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0657 - recall: 0.9861 - precision: 0.9680 - binary_accuracy: 0.9768 - val_loss: 0.1233 - val_recall: 0.9633 - val_precision: 0.9562 - val_binary_accuracy: 0.9596
Epoch 193/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0663 - recall: 0.9854 - precision: 0.9677 - binary_accuracy: 0.9763 - val_loss: 0.1254 - val_recall: 0.9705 - val_precision: 0.9505 - val_binary_accuracy: 0.9600
Epoch 194/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0645 - recall: 0.9866 - precision: 0.9679 - binary_accuracy: 0.9770 - val_loss: 0.1277 - val_recall: 0.9553 - val_precision: 0.9567 - val_binary_accuracy: 0.9560
Epoch 195/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0674 - recall: 0.9846 - precision: 0.9670 - binary_accuracy: 0.9755 - val_loss: 0.1218 - val_recall: 0.9755 - val_precision: 0.9481 - val_binary_accuracy: 0.9611
Epoch 196/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0659 - recall: 0.9852 - precision: 0.9675 - binary_accuracy: 0.9761 - val_loss: 0.1194 - val_recall: 0.9708 - val_precision: 0.9520 - val_binary_accuracy: 0.9609
Epoch 197/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0654 - recall: 0.9860 - precision: 0.9678 - binary_accuracy: 0.9766 - val_loss: 0.1223 - val_recall: 0.9801 - val_precision: 0.9457 - val_binary_accuracy: 0.9619
Epoch 198/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0640 - recall: 0.9867 - precision: 0.9688 - binary_accuracy: 0.9775 - val_loss: 0.1244 - val_recall: 0.9690 - val_precision: 0.9512 - val_binary_accuracy: 0.9596
Epoch 199/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0659 - recall: 0.9853 - precision: 0.9681 - binary_accuracy: 0.9764 - val_loss: 0.1236 - val_recall: 0.9735 - val_precision: 0.9487 - val_binary_accuracy: 0.9604
Epoch 200/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0641 - recall: 0.9866 - precision: 0.9690 - binary_accuracy: 0.9776 - val_loss: 0.1211 - val_recall: 0.9722 - val_precision: 0.9523 - val_binary_accuracy: 0.9617
Epoch 201/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0638 - recall: 0.9867 - precision: 0.9687 - binary_accuracy: 0.9774 - val_loss: 0.1241 - val_recall: 0.9821 - val_precision: 0.9412 - val_binary_accuracy: 0.9604
Epoch 202/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0650 - recall: 0.9860 - precision: 0.9680 - binary_accuracy: 0.9767 - val_loss: 0.1233 - val_recall: 0.9680 - val_precision: 0.9537 - val_binary_accuracy: 0.9605
Epoch 203/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0648 - recall: 0.9860 - precision: 0.9687 - binary_accuracy: 0.9771 - val_loss: 0.1238 - val_recall: 0.9635 - val_precision: 0.9560 - val_binary_accuracy: 0.9596
Epoch 204/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0640 - recall: 0.9864 - precision: 0.9689 - binary_accuracy: 0.9774 - val_loss: 0.1296 - val_recall: 0.9862 - val_precision: 0.9401 - val_binary_accuracy: 0.9617
Epoch 205/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0685 - recall: 0.9836 - precision: 0.9665 - binary_accuracy: 0.9747 - val_loss: 0.1219 - val_recall: 0.9806 - val_precision: 0.9441 - val_binary_accuracy: 0.9613
Epoch 206/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0646 - recall: 0.9865 - precision: 0.9680 - binary_accuracy: 0.9770 - val_loss: 0.1201 - val_recall: 0.9694 - val_precision: 0.9545 - val_binary_accuracy: 0.9616
Epoch 207/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0639 - recall: 0.9866 - precision: 0.9694 - binary_accuracy: 0.9778 - val_loss: 0.1270 - val_recall: 0.9835 - val_precision: 0.9427 - val_binary_accuracy: 0.9619
Epoch 208/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0636 - recall: 0.9866 - precision: 0.9687 - binary_accuracy: 0.9774 - val_loss: 0.1206 - val_recall: 0.9747 - val_precision: 0.9517 - val_binary_accuracy: 0.9626
Epoch 209/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0610 - recall: 0.9880 - precision: 0.9704 - binary_accuracy: 0.9789 - val_loss: 0.1253 - val_recall: 0.9783 - val_precision: 0.9498 - val_binary_accuracy: 0.9633
Epoch 210/350
49/49 [==============================] - 1s 16ms/step - loss: 0.0623 - recall: 0.9866 - precision: 0.9693 - binary_accuracy: 0.9777 - val_loss: 0.1255 - val_recall: 0.9810 - val_precision: 0.9473 - val_binary_accuracy: 0.9632
Epoch 211/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0623 - recall: 0.9866 - precision: 0.9699 - binary_accuracy: 0.9780 - val_loss: 0.1244 - val_recall: 0.9743 - val_precision: 0.9496 - val_binary_accuracy: 0.9613
Epoch 212/350
49/49 [==============================] - 1s 19ms/step - loss: 0.0625 - recall: 0.9866 - precision: 0.9694 - binary_accuracy: 0.9778 - val_loss: 0.1262 - val_recall: 0.9716 - val_precision: 0.9525 - val_binary_accuracy: 0.9616
Epoch 213/350
49/49 [==============================] - 1s 19ms/step - loss: 0.0629 - recall: 0.9860 - precision: 0.9693 - binary_accuracy: 0.9774 - val_loss: 0.1242 - val_recall: 0.9746 - val_precision: 0.9514 - val_binary_accuracy: 0.9624
Epoch 214/350
49/49 [==============================] - 1s 17ms/step - loss: 0.0617 - recall: 0.9866 - precision: 0.9700 - binary_accuracy: 0.9781 - val_loss: 0.1253 - val_recall: 0.9824 - val_precision: 0.9458 - val_binary_accuracy: 0.9631
Epoch 215/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0613 - recall: 0.9874 - precision: 0.9701 - binary_accuracy: 0.9785 - val_loss: 0.1263 - val_recall: 0.9740 - val_precision: 0.9523 - val_binary_accuracy: 0.9626
Epoch 216/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0612 - recall: 0.9870 - precision: 0.9697 - binary_accuracy: 0.9781 - val_loss: 0.1227 - val_recall: 0.9819 - val_precision: 0.9464 - val_binary_accuracy: 0.9631
Epoch 217/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0608 - recall: 0.9875 - precision: 0.9701 - binary_accuracy: 0.9786 - val_loss: 0.1253 - val_recall: 0.9737 - val_precision: 0.9515 - val_binary_accuracy: 0.9621
Epoch 218/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0620 - recall: 0.9856 - precision: 0.9701 - binary_accuracy: 0.9776 - val_loss: 0.1299 - val_recall: 0.9585 - val_precision: 0.9571 - val_binary_accuracy: 0.9578
Epoch 219/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0682 - recall: 0.9846 - precision: 0.9664 - binary_accuracy: 0.9752 - val_loss: 0.1250 - val_recall: 0.9789 - val_precision: 0.9459 - val_binary_accuracy: 0.9615
Epoch 220/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0640 - recall: 0.9854 - precision: 0.9690 - binary_accuracy: 0.9770 - val_loss: 0.1280 - val_recall: 0.9826 - val_precision: 0.9424 - val_binary_accuracy: 0.9613
Epoch 221/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0606 - recall: 0.9875 - precision: 0.9705 - binary_accuracy: 0.9788 - val_loss: 0.1228 - val_recall: 0.9802 - val_precision: 0.9473 - val_binary_accuracy: 0.9629
Epoch 222/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0605 - recall: 0.9875 - precision: 0.9703 - binary_accuracy: 0.9787 - val_loss: 0.1280 - val_recall: 0.9826 - val_precision: 0.9435 - val_binary_accuracy: 0.9619
Epoch 223/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0606 - recall: 0.9876 - precision: 0.9711 - binary_accuracy: 0.9791 - val_loss: 0.1278 - val_recall: 0.9812 - val_precision: 0.9464 - val_binary_accuracy: 0.9628
Epoch 224/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0595 - recall: 0.9879 - precision: 0.9709 - binary_accuracy: 0.9791 - val_loss: 0.1298 - val_recall: 0.9578 - val_precision: 0.9593 - val_binary_accuracy: 0.9586
Epoch 225/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0601 - recall: 0.9873 - precision: 0.9704 - binary_accuracy: 0.9786 - val_loss: 0.1243 - val_recall: 0.9692 - val_precision: 0.9555 - val_binary_accuracy: 0.9621
Epoch 226/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0607 - recall: 0.9870 - precision: 0.9707 - binary_accuracy: 0.9787 - val_loss: 0.1254 - val_recall: 0.9645 - val_precision: 0.9563 - val_binary_accuracy: 0.9602
Epoch 227/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0586 - recall: 0.9878 - precision: 0.9707 - binary_accuracy: 0.9791 - val_loss: 0.1237 - val_recall: 0.9755 - val_precision: 0.9498 - val_binary_accuracy: 0.9619
Epoch 228/350
49/49 [==============================] - 1s 16ms/step - loss: 0.0593 - recall: 0.9875 - precision: 0.9711 - binary_accuracy: 0.9791 - val_loss: 0.1280 - val_recall: 0.9772 - val_precision: 0.9468 - val_binary_accuracy: 0.9611
Epoch 229/350
49/49 [==============================] - 1s 16ms/step - loss: 0.0627 - recall: 0.9862 - precision: 0.9695 - binary_accuracy: 0.9776 - val_loss: 0.1397 - val_recall: 0.9448 - val_precision: 0.9611 - val_binary_accuracy: 0.9533
Epoch 230/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0631 - recall: 0.9860 - precision: 0.9691 - binary_accuracy: 0.9773 - val_loss: 0.1237 - val_recall: 0.9732 - val_precision: 0.9532 - val_binary_accuracy: 0.9627
Epoch 231/350
49/49 [==============================] - 1s 16ms/step - loss: 0.0581 - recall: 0.9878 - precision: 0.9717 - binary_accuracy: 0.9796 - val_loss: 0.1270 - val_recall: 0.9796 - val_precision: 0.9466 - val_binary_accuracy: 0.9621
Epoch 232/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0582 - recall: 0.9883 - precision: 0.9715 - binary_accuracy: 0.9796 - val_loss: 0.1260 - val_recall: 0.9777 - val_precision: 0.9499 - val_binary_accuracy: 0.9631
Epoch 233/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0586 - recall: 0.9880 - precision: 0.9717 - binary_accuracy: 0.9796 - val_loss: 0.1245 - val_recall: 0.9671 - val_precision: 0.9560 - val_binary_accuracy: 0.9613
Epoch 234/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0593 - recall: 0.9873 - precision: 0.9713 - binary_accuracy: 0.9791 - val_loss: 0.1250 - val_recall: 0.9747 - val_precision: 0.9532 - val_binary_accuracy: 0.9634
Epoch 235/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0573 - recall: 0.9888 - precision: 0.9724 - binary_accuracy: 0.9804 - val_loss: 0.1248 - val_recall: 0.9714 - val_precision: 0.9524 - val_binary_accuracy: 0.9614
Epoch 236/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0569 - recall: 0.9885 - precision: 0.9727 - binary_accuracy: 0.9804 - val_loss: 0.1288 - val_recall: 0.9827 - val_precision: 0.9444 - val_binary_accuracy: 0.9624
Epoch 237/350
49/49 [==============================] - 1s 16ms/step - loss: 0.0584 - recall: 0.9882 - precision: 0.9710 - binary_accuracy: 0.9794 - val_loss: 0.1452 - val_recall: 0.9472 - val_precision: 0.9611 - val_binary_accuracy: 0.9544
Epoch 238/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0603 - recall: 0.9869 - precision: 0.9709 - binary_accuracy: 0.9787 - val_loss: 0.1269 - val_recall: 0.9752 - val_precision: 0.9521 - val_binary_accuracy: 0.9631
Epoch 239/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0574 - recall: 0.9880 - precision: 0.9722 - binary_accuracy: 0.9799 - val_loss: 0.1306 - val_recall: 0.9808 - val_precision: 0.9465 - val_binary_accuracy: 0.9627
Epoch 240/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0605 - recall: 0.9869 - precision: 0.9698 - binary_accuracy: 0.9781 - val_loss: 0.1346 - val_recall: 0.9555 - val_precision: 0.9603 - val_binary_accuracy: 0.9580
Epoch 241/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0588 - recall: 0.9871 - precision: 0.9722 - binary_accuracy: 0.9794 - val_loss: 0.1284 - val_recall: 0.9812 - val_precision: 0.9469 - val_binary_accuracy: 0.9631
Epoch 242/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0598 - recall: 0.9873 - precision: 0.9707 - binary_accuracy: 0.9788 - val_loss: 0.1254 - val_recall: 0.9762 - val_precision: 0.9516 - val_binary_accuracy: 0.9633
Epoch 243/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0556 - recall: 0.9891 - precision: 0.9726 - binary_accuracy: 0.9806 - val_loss: 0.1249 - val_recall: 0.9720 - val_precision: 0.9558 - val_binary_accuracy: 0.9635
Epoch 244/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0558 - recall: 0.9881 - precision: 0.9730 - binary_accuracy: 0.9804 - val_loss: 0.1260 - val_recall: 0.9809 - val_precision: 0.9478 - val_binary_accuracy: 0.9634
Epoch 245/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0555 - recall: 0.9887 - precision: 0.9730 - binary_accuracy: 0.9807 - val_loss: 0.1311 - val_recall: 0.9678 - val_precision: 0.9513 - val_binary_accuracy: 0.9591
Epoch 246/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0562 - recall: 0.9884 - precision: 0.9730 - binary_accuracy: 0.9805 - val_loss: 0.1375 - val_recall: 0.9878 - val_precision: 0.9374 - val_binary_accuracy: 0.9609
Epoch 247/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0559 - recall: 0.9883 - precision: 0.9726 - binary_accuracy: 0.9803 - val_loss: 0.1280 - val_recall: 0.9748 - val_precision: 0.9504 - val_binary_accuracy: 0.9620
Epoch 248/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0553 - recall: 0.9890 - precision: 0.9734 - binary_accuracy: 0.9810 - val_loss: 0.1304 - val_recall: 0.9794 - val_precision: 0.9494 - val_binary_accuracy: 0.9636
Epoch 249/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0555 - recall: 0.9882 - precision: 0.9731 - binary_accuracy: 0.9805 - val_loss: 0.1306 - val_recall: 0.9675 - val_precision: 0.9539 - val_binary_accuracy: 0.9604
Epoch 250/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0562 - recall: 0.9883 - precision: 0.9724 - binary_accuracy: 0.9801 - val_loss: 0.1360 - val_recall: 0.9620 - val_precision: 0.9567 - val_binary_accuracy: 0.9592
Epoch 251/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0586 - recall: 0.9867 - precision: 0.9716 - binary_accuracy: 0.9789 - val_loss: 0.1317 - val_recall: 0.9705 - val_precision: 0.9547 - val_binary_accuracy: 0.9622
Epoch 252/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0556 - recall: 0.9890 - precision: 0.9734 - binary_accuracy: 0.9810 - val_loss: 0.1353 - val_recall: 0.9773 - val_precision: 0.9454 - val_binary_accuracy: 0.9604
Epoch 253/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0548 - recall: 0.9889 - precision: 0.9735 - binary_accuracy: 0.9810 - val_loss: 0.1317 - val_recall: 0.9768 - val_precision: 0.9510 - val_binary_accuracy: 0.9632
Epoch 254/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0559 - recall: 0.9881 - precision: 0.9730 - binary_accuracy: 0.9804 - val_loss: 0.1316 - val_recall: 0.9812 - val_precision: 0.9468 - val_binary_accuracy: 0.9631
Epoch 255/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0599 - recall: 0.9863 - precision: 0.9705 - binary_accuracy: 0.9782 - val_loss: 0.1298 - val_recall: 0.9734 - val_precision: 0.9525 - val_binary_accuracy: 0.9624
Epoch 256/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0558 - recall: 0.9886 - precision: 0.9727 - binary_accuracy: 0.9804 - val_loss: 0.1309 - val_recall: 0.9733 - val_precision: 0.9539 - val_binary_accuracy: 0.9631
Epoch 257/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0542 - recall: 0.9891 - precision: 0.9737 - binary_accuracy: 0.9812 - val_loss: 0.1281 - val_recall: 0.9811 - val_precision: 0.9470 - val_binary_accuracy: 0.9631
Epoch 258/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0553 - recall: 0.9884 - precision: 0.9727 - binary_accuracy: 0.9803 - val_loss: 0.1309 - val_recall: 0.9755 - val_precision: 0.9523 - val_binary_accuracy: 0.9633
Epoch 259/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0561 - recall: 0.9883 - precision: 0.9724 - binary_accuracy: 0.9802 - val_loss: 0.1306 - val_recall: 0.9687 - val_precision: 0.9548 - val_binary_accuracy: 0.9614
Epoch 260/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0539 - recall: 0.9889 - precision: 0.9737 - binary_accuracy: 0.9811 - val_loss: 0.1386 - val_recall: 0.9542 - val_precision: 0.9587 - val_binary_accuracy: 0.9565
Epoch 261/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0552 - recall: 0.9883 - precision: 0.9732 - binary_accuracy: 0.9806 - val_loss: 0.1324 - val_recall: 0.9792 - val_precision: 0.9475 - val_binary_accuracy: 0.9624
Epoch 262/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0556 - recall: 0.9886 - precision: 0.9730 - binary_accuracy: 0.9806 - val_loss: 0.1403 - val_recall: 0.9839 - val_precision: 0.9415 - val_binary_accuracy: 0.9614
Epoch 263/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0551 - recall: 0.9887 - precision: 0.9734 - binary_accuracy: 0.9808 - val_loss: 0.1291 - val_recall: 0.9714 - val_precision: 0.9532 - val_binary_accuracy: 0.9618
Epoch 264/350
49/49 [==============================] - 1s 17ms/step - loss: 0.0546 - recall: 0.9883 - precision: 0.9735 - binary_accuracy: 0.9807 - val_loss: 0.1323 - val_recall: 0.9789 - val_precision: 0.9486 - val_binary_accuracy: 0.9629
Epoch 265/350
49/49 [==============================] - 1s 18ms/step - loss: 0.0517 - recall: 0.9900 - precision: 0.9750 - binary_accuracy: 0.9823 - val_loss: 0.1340 - val_recall: 0.9815 - val_precision: 0.9463 - val_binary_accuracy: 0.9629
Epoch 266/350
49/49 [==============================] - 1s 18ms/step - loss: 0.0536 - recall: 0.9886 - precision: 0.9740 - binary_accuracy: 0.9811 - val_loss: 0.1299 - val_recall: 0.9704 - val_precision: 0.9546 - val_binary_accuracy: 0.9621
Epoch 267/350
49/49 [==============================] - 1s 24ms/step - loss: 0.0519 - recall: 0.9895 - precision: 0.9745 - binary_accuracy: 0.9819 - val_loss: 0.1340 - val_recall: 0.9779 - val_precision: 0.9501 - val_binary_accuracy: 0.9632
Epoch 268/350
49/49 [==============================] - 1s 20ms/step - loss: 0.0539 - recall: 0.9886 - precision: 0.9739 - binary_accuracy: 0.9811 - val_loss: 0.1338 - val_recall: 0.9762 - val_precision: 0.9487 - val_binary_accuracy: 0.9617
Epoch 269/350
49/49 [==============================] - 1s 16ms/step - loss: 0.0527 - recall: 0.9898 - precision: 0.9748 - binary_accuracy: 0.9821 - val_loss: 0.1376 - val_recall: 0.9603 - val_precision: 0.9587 - val_binary_accuracy: 0.9595
Epoch 270/350
49/49 [==============================] - 1s 16ms/step - loss: 0.0525 - recall: 0.9890 - precision: 0.9745 - binary_accuracy: 0.9816 - val_loss: 0.1291 - val_recall: 0.9745 - val_precision: 0.9530 - val_binary_accuracy: 0.9632
Epoch 271/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0532 - recall: 0.9886 - precision: 0.9741 - binary_accuracy: 0.9812 - val_loss: 0.1382 - val_recall: 0.9834 - val_precision: 0.9459 - val_binary_accuracy: 0.9636
Epoch 272/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0534 - recall: 0.9890 - precision: 0.9739 - binary_accuracy: 0.9813 - val_loss: 0.1313 - val_recall: 0.9677 - val_precision: 0.9582 - val_binary_accuracy: 0.9627
Epoch 273/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0539 - recall: 0.9880 - precision: 0.9742 - binary_accuracy: 0.9809 - val_loss: 0.1412 - val_recall: 0.9777 - val_precision: 0.9476 - val_binary_accuracy: 0.9618
Epoch 274/350
49/49 [==============================] - 1s 16ms/step - loss: 0.0532 - recall: 0.9887 - precision: 0.9741 - binary_accuracy: 0.9812 - val_loss: 0.1371 - val_recall: 0.9787 - val_precision: 0.9463 - val_binary_accuracy: 0.9616
Epoch 275/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0526 - recall: 0.9888 - precision: 0.9743 - binary_accuracy: 0.9814 - val_loss: 0.1428 - val_recall: 0.9869 - val_precision: 0.9385 - val_binary_accuracy: 0.9611
Epoch 276/350
49/49 [==============================] - 1s 17ms/step - loss: 0.0523 - recall: 0.9894 - precision: 0.9744 - binary_accuracy: 0.9817 - val_loss: 0.1374 - val_recall: 0.9559 - val_precision: 0.9611 - val_binary_accuracy: 0.9586
Epoch 277/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0548 - recall: 0.9882 - precision: 0.9737 - binary_accuracy: 0.9808 - val_loss: 0.1317 - val_recall: 0.9800 - val_precision: 0.9497 - val_binary_accuracy: 0.9640
Epoch 278/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0553 - recall: 0.9877 - precision: 0.9730 - binary_accuracy: 0.9801 - val_loss: 0.1433 - val_recall: 0.9879 - val_precision: 0.9384 - val_binary_accuracy: 0.9615
Epoch 279/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0503 - recall: 0.9905 - precision: 0.9754 - binary_accuracy: 0.9827 - val_loss: 0.1321 - val_recall: 0.9720 - val_precision: 0.9531 - val_binary_accuracy: 0.9621
Epoch 280/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0515 - recall: 0.9894 - precision: 0.9752 - binary_accuracy: 0.9821 - val_loss: 0.1359 - val_recall: 0.9791 - val_precision: 0.9476 - val_binary_accuracy: 0.9625
Epoch 281/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0507 - recall: 0.9900 - precision: 0.9758 - binary_accuracy: 0.9827 - val_loss: 0.1326 - val_recall: 0.9642 - val_precision: 0.9568 - val_binary_accuracy: 0.9604
Epoch 282/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0516 - recall: 0.9893 - precision: 0.9754 - binary_accuracy: 0.9822 - val_loss: 0.1352 - val_recall: 0.9693 - val_precision: 0.9550 - val_binary_accuracy: 0.9618
Epoch 283/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0517 - recall: 0.9890 - precision: 0.9752 - binary_accuracy: 0.9819 - val_loss: 0.1388 - val_recall: 0.9742 - val_precision: 0.9479 - val_binary_accuracy: 0.9603
Epoch 284/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0500 - recall: 0.9903 - precision: 0.9755 - binary_accuracy: 0.9827 - val_loss: 0.1439 - val_recall: 0.9624 - val_precision: 0.9534 - val_binary_accuracy: 0.9577
Epoch 285/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0490 - recall: 0.9904 - precision: 0.9762 - binary_accuracy: 0.9831 - val_loss: 0.1358 - val_recall: 0.9653 - val_precision: 0.9577 - val_binary_accuracy: 0.9613
Epoch 286/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0490 - recall: 0.9910 - precision: 0.9760 - binary_accuracy: 0.9833 - val_loss: 0.1390 - val_recall: 0.9727 - val_precision: 0.9534 - val_binary_accuracy: 0.9626
Epoch 287/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0539 - recall: 0.9880 - precision: 0.9748 - binary_accuracy: 0.9813 - val_loss: 0.1466 - val_recall: 0.9800 - val_precision: 0.9427 - val_binary_accuracy: 0.9602
Epoch 288/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0513 - recall: 0.9897 - precision: 0.9751 - binary_accuracy: 0.9822 - val_loss: 0.1386 - val_recall: 0.9593 - val_precision: 0.9591 - val_binary_accuracy: 0.9592
Epoch 289/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0494 - recall: 0.9894 - precision: 0.9764 - binary_accuracy: 0.9828 - val_loss: 0.1413 - val_recall: 0.9842 - val_precision: 0.9427 - val_binary_accuracy: 0.9622
Epoch 290/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0512 - recall: 0.9894 - precision: 0.9754 - binary_accuracy: 0.9822 - val_loss: 0.1377 - val_recall: 0.9753 - val_precision: 0.9521 - val_binary_accuracy: 0.9631
Epoch 291/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0507 - recall: 0.9899 - precision: 0.9759 - binary_accuracy: 0.9827 - val_loss: 0.1342 - val_recall: 0.9726 - val_precision: 0.9528 - val_binary_accuracy: 0.9622
Epoch 292/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0510 - recall: 0.9894 - precision: 0.9749 - binary_accuracy: 0.9820 - val_loss: 0.1428 - val_recall: 0.9548 - val_precision: 0.9610 - val_binary_accuracy: 0.9580
Epoch 293/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0516 - recall: 0.9888 - precision: 0.9750 - binary_accuracy: 0.9817 - val_loss: 0.1333 - val_recall: 0.9729 - val_precision: 0.9538 - val_binary_accuracy: 0.9629
Epoch 294/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0494 - recall: 0.9899 - precision: 0.9760 - binary_accuracy: 0.9828 - val_loss: 0.1422 - val_recall: 0.9625 - val_precision: 0.9563 - val_binary_accuracy: 0.9592
Epoch 295/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0519 - recall: 0.9885 - precision: 0.9752 - binary_accuracy: 0.9817 - val_loss: 0.1415 - val_recall: 0.9820 - val_precision: 0.9463 - val_binary_accuracy: 0.9631
Epoch 296/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0503 - recall: 0.9897 - precision: 0.9761 - binary_accuracy: 0.9827 - val_loss: 0.1421 - val_recall: 0.9839 - val_precision: 0.9427 - val_binary_accuracy: 0.9620
Epoch 297/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0475 - recall: 0.9905 - precision: 0.9771 - binary_accuracy: 0.9837 - val_loss: 0.1450 - val_recall: 0.9859 - val_precision: 0.9422 - val_binary_accuracy: 0.9627
Epoch 298/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0492 - recall: 0.9900 - precision: 0.9760 - binary_accuracy: 0.9828 - val_loss: 0.1381 - val_recall: 0.9678 - val_precision: 0.9544 - val_binary_accuracy: 0.9608
Epoch 299/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0494 - recall: 0.9892 - precision: 0.9769 - binary_accuracy: 0.9829 - val_loss: 0.1426 - val_recall: 0.9823 - val_precision: 0.9467 - val_binary_accuracy: 0.9635
Epoch 300/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0491 - recall: 0.9897 - precision: 0.9760 - binary_accuracy: 0.9827 - val_loss: 0.1392 - val_recall: 0.9783 - val_precision: 0.9498 - val_binary_accuracy: 0.9633
Epoch 301/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0517 - recall: 0.9890 - precision: 0.9746 - binary_accuracy: 0.9816 - val_loss: 0.1397 - val_recall: 0.9817 - val_precision: 0.9454 - val_binary_accuracy: 0.9625
Epoch 302/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0471 - recall: 0.9913 - precision: 0.9768 - binary_accuracy: 0.9839 - val_loss: 0.1372 - val_recall: 0.9682 - val_precision: 0.9546 - val_binary_accuracy: 0.9611
Epoch 303/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0487 - recall: 0.9893 - precision: 0.9767 - binary_accuracy: 0.9829 - val_loss: 0.1366 - val_recall: 0.9723 - val_precision: 0.9522 - val_binary_accuracy: 0.9617
Epoch 304/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0483 - recall: 0.9899 - precision: 0.9770 - binary_accuracy: 0.9833 - val_loss: 0.1427 - val_recall: 0.9808 - val_precision: 0.9471 - val_binary_accuracy: 0.9630
Epoch 305/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0492 - recall: 0.9899 - precision: 0.9764 - binary_accuracy: 0.9830 - val_loss: 0.1347 - val_recall: 0.9738 - val_precision: 0.9512 - val_binary_accuracy: 0.9619
Epoch 306/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0525 - recall: 0.9882 - precision: 0.9745 - binary_accuracy: 0.9812 - val_loss: 0.1379 - val_recall: 0.9727 - val_precision: 0.9483 - val_binary_accuracy: 0.9598
Epoch 307/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0463 - recall: 0.9911 - precision: 0.9779 - binary_accuracy: 0.9844 - val_loss: 0.1381 - val_recall: 0.9818 - val_precision: 0.9469 - val_binary_accuracy: 0.9634
Epoch 308/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0475 - recall: 0.9906 - precision: 0.9772 - binary_accuracy: 0.9837 - val_loss: 0.1406 - val_recall: 0.9812 - val_precision: 0.9479 - val_binary_accuracy: 0.9636
Epoch 309/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0484 - recall: 0.9897 - precision: 0.9764 - binary_accuracy: 0.9829 - val_loss: 0.1408 - val_recall: 0.9690 - val_precision: 0.9521 - val_binary_accuracy: 0.9601
Epoch 310/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0470 - recall: 0.9905 - precision: 0.9771 - binary_accuracy: 0.9836 - val_loss: 0.1398 - val_recall: 0.9778 - val_precision: 0.9503 - val_binary_accuracy: 0.9634
Epoch 311/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0452 - recall: 0.9916 - precision: 0.9786 - binary_accuracy: 0.9850 - val_loss: 0.1442 - val_recall: 0.9592 - val_precision: 0.9599 - val_binary_accuracy: 0.9595
Epoch 312/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0496 - recall: 0.9891 - precision: 0.9768 - binary_accuracy: 0.9828 - val_loss: 0.1525 - val_recall: 0.9865 - val_precision: 0.9397 - val_binary_accuracy: 0.9616
Epoch 313/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0493 - recall: 0.9898 - precision: 0.9762 - binary_accuracy: 0.9829 - val_loss: 0.1396 - val_recall: 0.9786 - val_precision: 0.9490 - val_binary_accuracy: 0.9630
Epoch 314/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0478 - recall: 0.9897 - precision: 0.9767 - binary_accuracy: 0.9830 - val_loss: 0.1393 - val_recall: 0.9794 - val_precision: 0.9494 - val_binary_accuracy: 0.9636
Epoch 315/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0504 - recall: 0.9880 - precision: 0.9756 - binary_accuracy: 0.9817 - val_loss: 0.1428 - val_recall: 0.9822 - val_precision: 0.9473 - val_binary_accuracy: 0.9638
Epoch 316/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0456 - recall: 0.9908 - precision: 0.9785 - binary_accuracy: 0.9845 - val_loss: 0.1477 - val_recall: 0.9850 - val_precision: 0.9397 - val_binary_accuracy: 0.9609
Epoch 317/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0485 - recall: 0.9902 - precision: 0.9766 - binary_accuracy: 0.9833 - val_loss: 0.1441 - val_recall: 0.9625 - val_precision: 0.9580 - val_binary_accuracy: 0.9601
Epoch 318/350
49/49 [==============================] - 1s 16ms/step - loss: 0.0457 - recall: 0.9913 - precision: 0.9784 - binary_accuracy: 0.9847 - val_loss: 0.1408 - val_recall: 0.9699 - val_precision: 0.9560 - val_binary_accuracy: 0.9626
Epoch 319/350
49/49 [==============================] - 1s 18ms/step - loss: 0.0439 - recall: 0.9919 - precision: 0.9790 - binary_accuracy: 0.9854 - val_loss: 0.1411 - val_recall: 0.9817 - val_precision: 0.9467 - val_binary_accuracy: 0.9632
Epoch 320/350
49/49 [==============================] - 1s 20ms/step - loss: 0.0456 - recall: 0.9907 - precision: 0.9778 - binary_accuracy: 0.9841 - val_loss: 0.1514 - val_recall: 0.9508 - val_precision: 0.9601 - val_binary_accuracy: 0.9556
Epoch 321/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0469 - recall: 0.9902 - precision: 0.9775 - binary_accuracy: 0.9837 - val_loss: 0.1393 - val_recall: 0.9764 - val_precision: 0.9534 - val_binary_accuracy: 0.9643
Epoch 322/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0472 - recall: 0.9897 - precision: 0.9774 - binary_accuracy: 0.9834 - val_loss: 0.1419 - val_recall: 0.9769 - val_precision: 0.9485 - val_binary_accuracy: 0.9619
Epoch 323/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0461 - recall: 0.9906 - precision: 0.9777 - binary_accuracy: 0.9840 - val_loss: 0.1418 - val_recall: 0.9667 - val_precision: 0.9572 - val_binary_accuracy: 0.9617
Epoch 324/350
49/49 [==============================] - 1s 16ms/step - loss: 0.0454 - recall: 0.9907 - precision: 0.9783 - binary_accuracy: 0.9844 - val_loss: 0.1417 - val_recall: 0.9785 - val_precision: 0.9497 - val_binary_accuracy: 0.9633
Epoch 325/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0454 - recall: 0.9907 - precision: 0.9779 - binary_accuracy: 0.9841 - val_loss: 0.1589 - val_recall: 0.9877 - val_precision: 0.9367 - val_binary_accuracy: 0.9604
Epoch 326/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0502 - recall: 0.9887 - precision: 0.9758 - binary_accuracy: 0.9821 - val_loss: 0.1490 - val_recall: 0.9834 - val_precision: 0.9438 - val_binary_accuracy: 0.9625
Epoch 327/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0469 - recall: 0.9905 - precision: 0.9769 - binary_accuracy: 0.9835 - val_loss: 0.1417 - val_recall: 0.9794 - val_precision: 0.9490 - val_binary_accuracy: 0.9634
Epoch 328/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0477 - recall: 0.9895 - precision: 0.9772 - binary_accuracy: 0.9832 - val_loss: 0.1455 - val_recall: 0.9834 - val_precision: 0.9438 - val_binary_accuracy: 0.9624
Epoch 329/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0483 - recall: 0.9899 - precision: 0.9763 - binary_accuracy: 0.9829 - val_loss: 0.1483 - val_recall: 0.9836 - val_precision: 0.9459 - val_binary_accuracy: 0.9636
Epoch 330/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0450 - recall: 0.9911 - precision: 0.9783 - binary_accuracy: 0.9845 - val_loss: 0.1438 - val_recall: 0.9716 - val_precision: 0.9524 - val_binary_accuracy: 0.9615
Epoch 331/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0439 - recall: 0.9917 - precision: 0.9791 - binary_accuracy: 0.9853 - val_loss: 0.1415 - val_recall: 0.9725 - val_precision: 0.9525 - val_binary_accuracy: 0.9620
Epoch 332/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0445 - recall: 0.9911 - precision: 0.9792 - binary_accuracy: 0.9850 - val_loss: 0.1420 - val_recall: 0.9787 - val_precision: 0.9507 - val_binary_accuracy: 0.9640
Epoch 333/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0432 - recall: 0.9918 - precision: 0.9792 - binary_accuracy: 0.9854 - val_loss: 0.1473 - val_recall: 0.9801 - val_precision: 0.9472 - val_binary_accuracy: 0.9627
Epoch 334/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0435 - recall: 0.9915 - precision: 0.9788 - binary_accuracy: 0.9851 - val_loss: 0.1411 - val_recall: 0.9677 - val_precision: 0.9570 - val_binary_accuracy: 0.9621
Epoch 335/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0451 - recall: 0.9907 - precision: 0.9783 - binary_accuracy: 0.9844 - val_loss: 0.1527 - val_recall: 0.9785 - val_precision: 0.9449 - val_binary_accuracy: 0.9607
Epoch 336/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0443 - recall: 0.9913 - precision: 0.9791 - binary_accuracy: 0.9851 - val_loss: 0.1482 - val_recall: 0.9817 - val_precision: 0.9471 - val_binary_accuracy: 0.9634
Epoch 337/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0459 - recall: 0.9902 - precision: 0.9777 - binary_accuracy: 0.9838 - val_loss: 0.1462 - val_recall: 0.9834 - val_precision: 0.9452 - val_binary_accuracy: 0.9632
Epoch 338/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0452 - recall: 0.9910 - precision: 0.9779 - binary_accuracy: 0.9843 - val_loss: 0.1429 - val_recall: 0.9699 - val_precision: 0.9554 - val_binary_accuracy: 0.9623
Epoch 339/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0434 - recall: 0.9914 - precision: 0.9794 - binary_accuracy: 0.9853 - val_loss: 0.1491 - val_recall: 0.9827 - val_precision: 0.9474 - val_binary_accuracy: 0.9640
Epoch 340/350
49/49 [==============================] - 1s 13ms/step - loss: 0.0439 - recall: 0.9911 - precision: 0.9788 - binary_accuracy: 0.9848 - val_loss: 0.1478 - val_recall: 0.9797 - val_precision: 0.9472 - val_binary_accuracy: 0.9625
Epoch 341/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0440 - recall: 0.9906 - precision: 0.9790 - binary_accuracy: 0.9847 - val_loss: 0.1424 - val_recall: 0.9734 - val_precision: 0.9519 - val_binary_accuracy: 0.9621
Epoch 342/350
49/49 [==============================] - 1s 17ms/step - loss: 0.0429 - recall: 0.9918 - precision: 0.9791 - binary_accuracy: 0.9853 - val_loss: 0.1439 - val_recall: 0.9772 - val_precision: 0.9524 - val_binary_accuracy: 0.9642
Epoch 343/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0406 - recall: 0.9926 - precision: 0.9804 - binary_accuracy: 0.9864 - val_loss: 0.1415 - val_recall: 0.9742 - val_precision: 0.9526 - val_binary_accuracy: 0.9629
Epoch 344/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0410 - recall: 0.9927 - precision: 0.9804 - binary_accuracy: 0.9865 - val_loss: 0.1483 - val_recall: 0.9756 - val_precision: 0.9529 - val_binary_accuracy: 0.9637
Epoch 345/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0425 - recall: 0.9920 - precision: 0.9794 - binary_accuracy: 0.9856 - val_loss: 0.1480 - val_recall: 0.9776 - val_precision: 0.9474 - val_binary_accuracy: 0.9617
Epoch 346/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0404 - recall: 0.9927 - precision: 0.9805 - binary_accuracy: 0.9865 - val_loss: 0.1442 - val_recall: 0.9784 - val_precision: 0.9516 - val_binary_accuracy: 0.9643
Epoch 347/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0430 - recall: 0.9907 - precision: 0.9792 - binary_accuracy: 0.9849 - val_loss: 0.1506 - val_recall: 0.9795 - val_precision: 0.9484 - val_binary_accuracy: 0.9631
Epoch 348/350
49/49 [==============================] - 1s 15ms/step - loss: 0.0430 - recall: 0.9916 - precision: 0.9796 - binary_accuracy: 0.9855 - val_loss: 0.1552 - val_recall: 0.9689 - val_precision: 0.9500 - val_binary_accuracy: 0.9590
Epoch 349/350
49/49 [==============================] - 1s 16ms/step - loss: 0.0457 - recall: 0.9899 - precision: 0.9783 - binary_accuracy: 0.9840 - val_loss: 0.1509 - val_recall: 0.9754 - val_precision: 0.9502 - val_binary_accuracy: 0.9621
Epoch 350/350
49/49 [==============================] - 1s 14ms/step - loss: 0.0422 - recall: 0.9919 - precision: 0.9793 - binary_accuracy: 0.9855 - val_loss: 0.1422 - val_recall: 0.9737 - val_precision: 0.9547 - val_binary_accuracy: 0.9637
443/443 [==============================] - 1s 3ms/step - loss: 0.1476 - recall: 0.9739 - precision: 0.9519 - binary_accuracy: 0.9621
Test recall:  0.9739057421684265
443/443 [==============================] - 1s 2ms/step

Ciò che è stato fatto all'interno della custom ANN merita una spiegazione. La prima fase è stata proprio quella di dividere il nostro dataset in tre, il train set, il validation set e il test set. Come possiamo notare, la suddivisione tra test set e train set è stata fatta con una politica che segue una divisione 9:1 (90% delle tuple saranno lasciate al train set, il restante 10% al test set). La suddivisione tra train set e validation set invece è diversa in quanto il validation set avrà il 22% di tuple rispetto al train set.

In seguito è stato utilizzato uno scaler. In una rete neurale, uno scaler viene utilizzato per trasformare i dati di input in un formato appropriato per l'addestramento del modello. Lo scaler può essere utilizzato per standardizzare i dati, cioè per rendere la media zero e la deviazione standard unitaria, o per eseguire altre trasformazioni come la normalizzazione. L'utilizzo dello scaler garantisce che tutte le feature dei dati abbiano una scala simile, consentendo al modello di apprendere in modo più efficiente e accurato. In seguit, è stata definita una variabile early_stopping, che entra in gioco qualora, durante l'addestramento della rete neurale, essa non migliora il suo score per un numero di epoche che è stato definito nella variabile stessa.

Viene creato il modello vuoto e ad esso, vengono aggiunti tre layers. I vari layers avranno un numero crescente di neuroni, in particolare la prima ne avrà 32, la seconda 64 e la terza 128. Si andrà di seguito a scalare la loro dimensione prima di ottenere l'output, in modo da avere risultati più accurati alla fine.

Viene, di seguito, compilato il modello e addestrato. Viene calcolato lo score della rete con il metodo 'model.evaluate' e vengono, infine, stampati i risultati

Performance¶

In [167]:
# Print confusion matrix of ann
confusion_matrix = metrics.confusion_matrix(y_test_rete,y_pred)
cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.show()

# Print classification report and other scores
print(classification_report(y_test_rete, y_pred))
print("Recall score:", recall_score(y_test_rete, y_pred, average="macro"))
print("Precision score:", precision_score(y_test_rete, y_pred))
print("f1_score score:", f1_score(y_test_rete, y_pred))
print("Accuracy:", accuracy_score(y_test_rete, y_pred))
              precision    recall  f1-score   support

           0       0.97      0.95      0.96      7025
           1       0.95      0.97      0.96      7128

    accuracy                           0.96     14153
   macro avg       0.96      0.96      0.96     14153
weighted avg       0.96      0.96      0.96     14153

Recall score: 0.9619706555471681
Precision score: 0.9518716577540107
f1_score score: 0.9627626378198462
Accuracy: 0.9620575143079206
In [168]:
set_scores(results_test_neural,"ANN",y_test_rete,y_pred)

MLP¶

Viene utilizzata una rete neurale già esistente di nome MLP, così da confrontare i risultati tra questa rete e la rete neurale custom appena creata

In [169]:
X_train_mlp, X_test_mlp, y_train_mlp, y_test_mlp = train_test_split(df_neural_X, df_neural_Y, test_size=0.33, random_state=42)
In [170]:
MLP = MLPClassifier(max_iter=5000,random_state=42)
MLP.fit(X_train_mlp, y_train_mlp)
Out[170]:
MLPClassifier(max_iter=5000, random_state=42)
In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook.
On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.
MLPClassifier(max_iter=5000, random_state=42)

Risultati Train set¶

In [171]:
y_train_MLP_predicted = MLP.predict(X_train_mlp)
In [172]:
confusion_matrix = metrics.confusion_matrix(y_train_mlp,y_train_MLP_predicted)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.grid(False)
plt.show()

print(classification_report(y_train_mlp, y_train_MLP_predicted))
              precision    recall  f1-score   support

           0       0.81      0.92      0.86     47408
           1       0.90      0.78      0.84     47413

    accuracy                           0.85     94821
   macro avg       0.86      0.85      0.85     94821
weighted avg       0.86      0.85      0.85     94821

Risultati Test set¶

In [173]:
y_test_MLP_predicted = MLP.predict(X_test_mlp)
In [174]:
confusion_matrix = metrics.confusion_matrix(y_test_mlp,y_test_MLP_predicted)

cm_display = metrics.ConfusionMatrixDisplay(confusion_matrix = confusion_matrix, display_labels = [0, 1])
fig, ax = plt.subplots(figsize=(3, 3))
cm_display.plot(ax=ax, values_format='d')
plt.grid(False)
plt.show()

print(classification_report(y_test_mlp, y_test_MLP_predicted))
              precision    recall  f1-score   support

           0       0.81      0.92      0.86     23354
           1       0.90      0.78      0.84     23349

    accuracy                           0.85     46703
   macro avg       0.86      0.85      0.85     46703
weighted avg       0.86      0.85      0.85     46703

In [175]:
set_scores(results_test_neural,"MLP",y_test_mlp,y_test_MLP_predicted)

Performance delle due Reti Neurali¶

In [176]:
results_test_neural
Out[176]:
accuracy balanced_accuracy precision w_precision recall w_recall f1
ANN 0.962058 0.961971 0.951872 0.962303 0.973906 0.962058 0.962763
MLP 0.849796 0.849788 0.904027 0.856214 0.782646 0.849796 0.838969

Accuracy¶

In [177]:
data_frame = results_test_neural.sort_values('accuracy', ascending=False)

plt.figure(figsize=(6,5))
plt.bar(data_frame.index, data_frame['accuracy'])
fig.show()
plt.ylabel('Accuracy',  fontsize=20)
plt.xlabel('Reti neurali',  fontsize=20)
ax = plt.title('Accuracy delle reti neurali',  fontsize=19)

F1 score¶

In [178]:
data_frame = results_test_neural.sort_values('f1', ascending=False)

plt.figure(figsize=(6,5))
plt.bar(data_frame.index, data_frame['f1'])
fig.show()
plt.ylabel('F1 score',  fontsize=20)
plt.xlabel('Reti neurali',  fontsize=20)
ax = plt.title('F1 score delle reti neurali',  fontsize=19)

Precision¶

In [179]:
data_frame = results_test_neural.sort_values('precision', ascending=False)

plt.figure(figsize=(6,5))
plt.bar(data_frame.index, data_frame['precision'])
fig.show()
plt.ylabel('Precision',  fontsize=20)
plt.xlabel('Reti neurali',  fontsize=20)
ax = plt.title('Precision delle reti neurali',  fontsize=19)

Recall¶

In [180]:
data_frame = results_test_neural.sort_values('recall', ascending=False)

plt.figure(figsize=(6,5))
plt.bar(data_frame.index, data_frame['recall'])
fig.show()
plt.ylabel('Recall',  fontsize=20)
plt.xlabel('Reti neurali',  fontsize=20)
ax = plt.title('Recall delle reti neurali',  fontsize=19)